[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278785#comment-14278785
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1652134 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1652134 ]

SOLR-6770 mask the useParams after expanding it

 Add/edit param sets and use them in Requests
 

 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch


 Make it possible to define paramsets and use them directly in requests.
 Example:
 {code}
 curl http://localhost:8983/solr/collection1/config/params -H 
 'Content-type:application/json'  -d '{
 "set" : {"x": {
     "a": "A val",
     "b": "B val"}
 },
 "set" : {"y": {
     "x": "X val",
     "Y": "Y val"}
 },
 "update" : {"y": {
     "x": "X val modified"}
 },
 "delete" : "z"
 }'
 #do a GET to view all the configured params
 curl http://localhost:8983/solr/collection1/config/params
 #or  GET with a specific name to get only one set of params
 curl http://localhost:8983/solr/collection1/config/params/x
 {code}
 This data will be stored in conf/params.json.
 These params are applied at request time, so adding/editing them will not 
 result in a core reload and will have no impact on performance.
 Example usage: http://localhost/solr/collection/select?useParams=x,y
 Alternatively, it can be configured directly on a request handler as follows:
 {code}
 <requestHandler name="/dump1" class="DumpRequestHandler" useParams="x"/>
 {code}
 A {{useParams}} specified in the request overrides the one specified in the 
 {{requestHandler}}.
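 As a rough SolrJ illustration of that override (hypothetical sketch; the 
 handler name /dump1 follows the example above, the client setup is assumed):
 {code:java}
 // The useParams sent with the request wins over useParams="x" configured on /dump1.
 SolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
 SolrQuery q = new SolrQuery("*:*");
 q.setRequestHandler("/dump1");
 q.set("useParams", "y");   // request-level value overrides the handler-level "x"
 QueryResponse rsp = client.query(q);
 {code}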






[jira] [Commented] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278783#comment-14278783
 ] 

Adrien Grand commented on LUCENE-6184:
--

The reason why Fuzzy1 and Fuzzy2 are faster too is that they rewrite to boolean 
queries by default, so this optimization helps them too.

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
     scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.






[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278869#comment-14278869
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1652155 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652155 ]

SOLR-6770 accidentally commented out a test

 Add/edit param sets and use them in Requests
 

 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch


 Make it possible to define paramsets and use them directly in requests.
 Example:
 {code}
 curl http://localhost:8983/solr/collection1/config/params -H 
 'Content-type:application/json'  -d '{
 "set" : {"x": {
     "a": "A val",
     "b": "B val"}
 },
 "set" : {"y": {
     "x": "X val",
     "Y": "Y val"}
 },
 "update" : {"y": {
     "x": "X val modified"}
 },
 "delete" : "z"
 }'
 #do a GET to view all the configured params
 curl http://localhost:8983/solr/collection1/config/params
 #or  GET with a specific name to get only one set of params
 curl http://localhost:8983/solr/collection1/config/params/x
 {code}
 This data will be stored in conf/params.json.
 These params are applied at request time, so adding/editing them will not 
 result in a core reload and will have no impact on performance.
 Example usage: http://localhost/solr/collection/select?useParams=x,y
 Alternatively, it can be configured directly on a request handler as follows:
 {code}
 <requestHandler name="/dump1" class="DumpRequestHandler" useParams="x"/>
 {code}
 A {{useParams}} specified in the request overrides the one specified in the 
 {{requestHandler}}.






[jira] [Commented] (LUCENE-6182) Spatial VisitorTemplate.visitScanned needn't be abstract

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278867#comment-14278867
 ] 

ASF subversion and git services commented on LUCENE-6182:
-

Commit 1652154 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1652154 ]

LUCENE-6182: spatial refactor VisitorTemplate.visitScanned needn't be abstract.
And have collectDocs specify BitSet not FixedBitSet.  (these are internal APIs)

 Spatial VisitorTemplate.visitScanned needn't be abstract
 

 Key: LUCENE-6182
 URL: https://issues.apache.org/jira/browse/LUCENE-6182
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 5.x

 Attachments: LUCENE-6182.patch


 visitScanned can be implemented, allowing subclasses to specialize if desired.
 {code:java}
 protected void visitScanned(Cell cell) throws IOException {
   if (queryShape.relate(cell.getShape()).intersects()) {
     if (cell.isLeaf()) {
       visitLeaf(cell);
     } else {
       visit(cell);
     }
   }
 }
 {code}
 Then I can remove Intersect's impl, and remove the one in prefix-tree faceting.
 Additionally, I noticed collectDocs(FixedBitSet) can be improved to take BitSet 
 and call bitSet.or(docsEnum).
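 A rough sketch of that collectDocs change (illustrative only; docsEnum is the 
 iterator named above, the rest is assumed):
 {code:java}
 // Widening the parameter to BitSet lets the default implementation just OR in
 // the matching docs from the iterator.
 protected void collectDocs(BitSet bitSet) throws IOException {
   bitSet.or(docsEnum);
 }
 {code}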






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4418 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4418/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278939#comment-14278939
 ] 

Robert Muir commented on LUCENE-6184:
-

Right, but with the PQ in place we could consider adding support, maybe by 
adding 'min' to score() to specify the start of the range. Maybe it's also a way 
to remove this crazy logic in the default impl:
{code}
int doc = scorer.docID();
if (doc < 0) {
  doc = scorer.nextDoc();
}
{code}
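A sketch of what such a signature could look like (purely hypothetical, not an 
existing API):
{code:java}
// Callers pass the start of the range, so a sparse scorer can position itself
// with advance(min) instead of the docID()/nextDoc() dance above.
public abstract int score(LeafCollector collector, int min, int max) throws IOException;
{code}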

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch, LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
     scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.






[jira] [Commented] (SOLR-6984) Solr commitwithin is not happening for deletebyId

2015-01-15 Thread sriram vaithianathan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279017#comment-14279017
 ] 

sriram vaithianathan commented on SOLR-6984:


Thanks for the info Ishan

 Solr commitwithin is not happening for deletebyId
 -

 Key: SOLR-6984
 URL: https://issues.apache.org/jira/browse/SOLR-6984
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.6, Trunk
Reporter: sriram vaithianathan
 Fix For: 4.10.4, 5.0, Trunk

 Attachments: 4_10_3-SOLR-6984.patch, trunk-SOLR-6984.patch


 Hi All,
 Just found that SolrJ does not use commitWithin when using deleteById. This 
 issue is discussed in 
 http://grokbase.com/t/lucene/solr-user/1275gkpntd/deletebyid-commitwithin-question
 Faced the same issue today and found that, in 
 org.apache.solr.client.solrj.request.UpdateRequest, when a new UpdateRequest is 
 created in the getRoutes() method (line number 244), the commitWithin param is 
 not set on the urequest variable as it is done a few lines above (line 
 number 204). This causes commitWithin to revert to its default value of -1 and 
 the commit does not happen. Tried setting it like 
 urequest.setCommitWithin(getCommitWithin()) and the feature then works from 
 SolrJ.
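 A minimal sketch of that fix (the surrounding code is a hypothetical 
 reconstruction; only the setCommitWithin call is the actual suggestion):
 {code:java}
 // Inside UpdateRequest.getRoutes(), where the per-route request is built:
 UpdateRequest urequest = new UpdateRequest();
 urequest.setParams(params);
 urequest.setCommitWithin(getCommitWithin()); // the missing call: propagate commitWithin
 urequest.deleteById(deleteId);
 {code}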






[jira] [Commented] (SOLR-5147) Support Block Join documents in DIH

2015-01-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278982#comment-14278982
 ] 

Noble Paul commented on SOLR-5147:
--

Hi [~mkhludnev], please let me know if this patch is final so that I can review 
and commit. I don't think it applies well to trunk.


 Support Block Join documents in DIH
 ---

 Key: SOLR-5147
 URL: https://issues.apache.org/jira/browse/SOLR-5147
 Project: Solr
  Issue Type: Sub-task
Reporter: Vadim Kirilchuk
Assignee: Noble Paul
 Fix For: 4.9, Trunk

 Attachments: SOLR-5147-5x.patch, SOLR-5147.patch, SOLR-5147.patch


 DIH should be able to index hierarchical documents, i.e. it should be able to 
 work with SolrInputDocument#addChildDocument.
 There was a patch in SOLR-3076: 
 https://issues.apache.org/jira/secure/attachment/12576960/dih-3076.patch
 But it is not up to date and far from complete.
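 For reference, a tiny sketch of the parent/child structure such DIH support 
 would need to build (standard SolrJ API; the ids are made up):
 {code:java}
 SolrInputDocument parent = new SolrInputDocument();
 parent.addField("id", "parent-1");
 SolrInputDocument child = new SolrInputDocument();
 child.addField("id", "child-1");
 parent.addChildDocument(child); // the API DIH would have to drive
 {code}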






[jira] [Commented] (SOLR-6982) bin/solr and SolrCLI should support SSL-related Java System Properties

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279069#comment-14279069
 ] 

ASF subversion and git services commented on SOLR-6982:
---

Commit 1652210 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1652210 ]

SOLR-6982: remove bad search/replace issue

 bin/solr and SolrCLI should support SSL-related Java System Properties
 --

 Key: SOLR-6982
 URL: https://issues.apache.org/jira/browse/SOLR-6982
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6982.patch


 SolrCLI is used by bin/solr to create collections, run a healthcheck, and get 
 system info. If Solr is running in SSL mode, then these actions won't work 
 unless the proper SSL-related Java system properties are set.
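 As a hedged illustration, these are the standard JSSE system properties 
 involved; how bin/solr and SolrCLI actually forward them is up to the patch:
 {code:java}
 // Warn if any of the stock javax.net.ssl properties is missing in the JVM that
 // SolrCLI runs in (illustrative check only).
 String[] sslProps = {
     "javax.net.ssl.keyStore", "javax.net.ssl.keyStorePassword",
     "javax.net.ssl.trustStore", "javax.net.ssl.trustStorePassword"
 };
 for (String p : sslProps) {
   if (System.getProperty(p) == null) {
     System.err.println("Warning: " + p + " is not set; HTTPS calls to Solr may fail");
   }
 }
 {code}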






[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279068#comment-14279068
 ] 

Michael McCandless commented on LUCENE-6185:


Hmm one problem with TopDocs.merge is that it doesn't re-base the docIDs.  
Instead, it sets shardIndex for each hit.  I think this patch should sometimes 
fail tests, when newSearcher swaps in an ExecutorService?

TopDocs.merge does this so that you can merge across indices that sum to > 2.1B 
docs.  But in this usage, the number of docs will be < 2.1B ... so maybe we 
need an option to TopDocs.merge to rebase?  Or we rebase afterwards in 
IndexSearcher?

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






[jira] [Resolved] (SOLR-6982) bin/solr and SolrCLI should support SSL-related Java System Properties

2015-01-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6982.
--
   Resolution: Fixed
Fix Version/s: Trunk

 bin/solr and SolrCLI should support SSL-related Java System Properties
 --

 Key: SOLR-6982
 URL: https://issues.apache.org/jira/browse/SOLR-6982
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk

 Attachments: SOLR-6982.patch


 SolrCLI is used by bin/solr to create collections, run a healthcheck, and get 
 system info. If Solr is running in SSL mode, then these actions won't work 
 unless the proper SSL-related Java system properties are set.






[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279104#comment-14279104
 ] 

Adrien Grand commented on LUCENE-6185:
--

I forgot to mention the patch applies to 5.x

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






Error building PyLucene with added classes

2015-01-15 Thread Daniel Duma
Hi all,

I have added some classes that I need to Lucene and now I cannot build
PyLucene 4.9.

Everything runs fine inside Eclipse, but when copying the .java files to
the corresponding folders inside the PyLucene source directory and
rebuilding, I get this error:




C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
error: cannot find symbol
  symbol: class QueryParser

Here is the full output:

ivy-configure:
[ivy:configure] :: Apache Ivy 2.4.0-rc1 - 20140315220245 ::
http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file =
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.load:

-clover.classpath:

-clover.setup:

clover:

compile-core:
[javac] Compiling 734 source files to
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\build\core\classes\java
[javac]
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:15:
error: cannot find symbol
[javac] MultiFieldQueryParser {
[javac] ^
[javac]   symbol: class MultiFieldQueryParser
[javac]
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
error: cannot find symbol
[javac] public class FieldAgnosticQueryParser extends QueryParser {
[javac]   ^
[javac]   symbol: class QueryParser
[javac]
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:23:
error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^

BUILD FAILED
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:694: The
following error occurred while executing this line:
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:480: The
following error occurred while executing this line:
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:1755:
Compile failed; see the compiler error output for details.

PyLucene builds just fine without the added files, and I have checked and
the files it can't find are where they should be!

Cheers,
Daniel


[jira] [Created] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6185:


 Summary: Fix IndexSearcher with threads to not collect documents 
out of order
 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk


When created with an executor, IndexSearcher searches all leaves in a different 
task and eventually merges the results when all tasks are completed. However, 
this merging logic involves a TopFieldCollector which is collected 
out-of-order. I think it should just use TopDocs.merge?






[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279035#comment-14279035
 ] 

Adrien Grand commented on LUCENE-6185:
--

Here is the build failure link 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11448/

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






[jira] [Commented] (SOLR-6982) bin/solr and SolrCLI should support SSL-related Java System Properties

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279060#comment-14279060
 ] 

ASF subversion and git services commented on SOLR-6982:
---

Commit 1652208 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1652208 ]

SOLR-6982: bin/solr and SolrCLI should support SSL-related Java System 
Properties

 bin/solr and SolrCLI should support SSL-related Java System Properties
 --

 Key: SOLR-6982
 URL: https://issues.apache.org/jira/browse/SOLR-6982
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6982.patch


 SolrCLI is used by bin/solr to create collections, run a healthcheck, and get 
 system info. If Solr is running in SSL mode, then these actions won't work 
 unless the proper SSL-related Java system properties are set.






Re: Error building PyLucene with added classes

2015-01-15 Thread Andi Vajda

 On Jan 15, 2015, at 09:31, Daniel Duma danield...@gmail.com wrote:
 
 Update: never mind, I was placing the files in the wrong folder. Solved!

Good, that was going to be my first question since you didn't tell us anything 
about your new class(es).

The proper way to add things to Lucene and PyLucene is to put your stuff into 
your own package and to create a jar file from that package. Then, to add it to 
PyLucene, you just add that jar to the list of jar files its build processes, via 
JCC's --jar parameter.

Andi..

 
 Thanks,
 Daniel
 
 On 15 January 2015 at 17:00, Daniel Duma danield...@gmail.com wrote:
 
 Hi all,
 
 I have added some classes that I need to Lucene and now I cannot build
 PyLucene 4.9.
 
 Everything runs fine inside Eclipse, but when copying the .java files to
 the corresponding folders inside the PyLucene source directory and
 rebuilding, I get this error:
 
 
 
 
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
 error: cannot find symbol
   symbol: class QueryParser
 
 Here is the full output:
 
 ivy-configure:
 [ivy:configure] :: Apache Ivy 2.4.0-rc1 - 20140315220245 ::
 http://ant.apache.org/ivy/ ::
 [ivy:configure] :: loading settings :: file =
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\ivy-settings.xml
 
 resolve:
 
 init:
 
 -clover.disable:
 
 -clover.load:
 
 -clover.classpath:
 
 -clover.setup:
 
 clover:
 
 compile-core:
[javac] Compiling 734 source files to
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\build\core\classes\java
[javac]
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:15:
 error: cannot find symbol
[javac] MultiFieldQueryParser {
[javac] ^
[javac]   symbol: class MultiFieldQueryParser
[javac]
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
 error: cannot find symbol
[javac] public class FieldAgnosticQueryParser extends QueryParser {
[javac]   ^
[javac]   symbol: class QueryParser
[javac]
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:23:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
 
 BUILD FAILED
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:694: The
 following error occurred while executing this line:
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:480: The
 following error occurred while executing this line:
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:1755:
 Compile failed; see the compiler error output for details.
 
 PyLucene builds just fine without the added files, and I have checked and
 the files it can't find are where they should be!
 
 Cheers,
 Daniel
 


[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279047#comment-14279047
 ] 

Michael McCandless commented on LUCENE-6185:


bq. I think it should just use TopDocs.merge?

+1

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






[jira] [Updated] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6185:
-
Attachment: LUCENE-6185.patch

Patch.

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






[jira] [Commented] (SOLR-6982) bin/solr and SolrCLI should support SSL-related Java System Properties

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279084#comment-14279084
 ] 

ASF subversion and git services commented on SOLR-6982:
---

Commit 1652217 from [~thelabdude] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1652217 ]

SOLR-6982: bin/solr and SolrCLI should support SSL-related Java System 
Properties

 bin/solr and SolrCLI should support SSL-related Java System Properties
 --

 Key: SOLR-6982
 URL: https://issues.apache.org/jira/browse/SOLR-6982
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6982.patch


 SolrCLI is used by bin/solr to create collections, run a healthcheck, and get 
 system info. If Solr is running in SSL mode, then these actions won't work 
 unless the proper SSL-related Java system properties are set.






[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279105#comment-14279105
 ] 

Michael McCandless commented on LUCENE-6185:


bq. Doc ids are already rebased with the doc base of each reader context before 
the TopDocs.merge call so I think that would be fine? 

Aha!  I forgot about that, you're right.  So all is good.

bq.  And then when we call TopDocs.merge, we provide the top docs instances in 
the same order as leaves have been provided to IndexSearcher.search so 
tie-breaking by shard id has the same effect as tie-breaking by doc id?

Yes, this is important: earlier shard wins in TopDocs.merge.

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11448 - Failure!

2015-01-15 Thread Adrien Grand
I opened https://issues.apache.org/jira/browse/LUCENE-6185

On Thu, Jan 15, 2015 at 5:44 PM, Adrien Grand jpou...@gmail.com wrote:
 I'm looking into it.

 On Thu, Jan 15, 2015 at 5:15 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11448/
 Java: 32bit/jdk1.8.0_40-ea-b20 -client -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  org.apache.lucene.expressions.TestExpressionSorts.testQueries

 Error Message:
 Hit 24 docnumbers don't match Hits length1=231 length2=231 hit=0: doc0=1.0,  
 doc0=1.0 hit=1: doc1=1.0,  doc1=1.0 hit=2: doc2=1.0,  doc2=1.0 hit=3: 
 doc3=1.0,  doc3=1.0 hit=4: doc4=1.0,  doc4=1.0 hit=5: doc5=1.0,  doc5=1.0 
 hit=6: doc6=1.0,  doc6=1.0 hit=7: doc7=1.0,  doc7=1.0 hit=8: doc8=1.0,  
 doc8=1.0 hit=9: doc9=1.0,  doc9=1.0 hit=10: doc10=1.0,  doc10=1.0 hit=11: 
 doc11=1.0,  doc11=1.0 hit=12: doc12=1.0,  doc12=1.0 hit=13: doc13=1.0,  
 doc13=1.0 hit=14: doc14=1.0,  doc14=1.0 hit=15: doc15=1.0,  doc15=1.0 
 hit=16: doc16=1.0,  doc16=1.0 hit=17: doc17=1.0,  doc17=1.0 hit=18: 
 doc18=1.0,  doc18=1.0 hit=19: doc19=1.0,  doc19=1.0 hit=20: doc20=1.0,  
 doc20=1.0 hit=21: doc21=1.0,  doc21=1.0 hit=22: doc22=1.0,  doc22=1.0 
 hit=23: doc23=1.0,  doc23=1.0 hit=24: doc24=1.0,  doc669=1.0 hit=25: 
 doc25=1.0,  doc670=1.0 hit=26: doc26=1.0,  doc671=1.0 hit=27: doc27=1.0,  
 doc672=1.0 hit=28: doc28=1.0,  doc673=1.0 hit=29: doc29=1.0,  doc674=1.0 
 hit=30: doc30=1.0,  doc675=1.0 hit=31: doc31=1.0,  doc676=1.0 hit=32: 
 doc32=1.0,  doc677=1.0 hit=33: doc33=1.0,  doc678=1.0 hit=34: doc34=1.0,  
 doc679=1.0 hit=35: doc35=1.0,  doc680=1.0 hit=36: doc36=1.0,  doc681=1.0 
 hit=37: doc37=1.0,  doc682=1.0 hit=38: doc38=1.0,  doc683=1.0 hit=39: 
 doc39=1.0,  doc684=1.0 hit=40: doc40=1.0,  doc685=1.0 hit=41: doc41=1.0,  
 doc686=1.0 hit=42: doc42=1.0,  doc687=1.0 hit=43: doc43=1.0,  doc688=1.0 
 hit=44: doc44=1.0,  doc689=1.0 hit=45: doc45=1.0,  doc690=1.0 hit=46: 
 doc46=1.0,  doc691=1.0 hit=47: doc47=1.0,  doc692=1.0 hit=48: doc48=1.0,  
 doc693=1.0 hit=49: doc49=1.0,  doc694=1.0 hit=50: doc50=1.0,  doc695=1.0 
 hit=51: doc51=1.0,  doc696=1.0 hit=52: doc52=1.0,  doc697=1.0 hit=53: 
 doc53=1.0,  doc698=1.0 hit=54: doc54=1.0,  doc699=1.0 hit=55: doc55=1.0,  
 doc700=1.0 hit=56: doc56=1.0,  doc701=1.0 hit=57: doc57=1.0,  doc702=1.0 
 hit=58: doc58=1.0,  doc703=1.0 hit=59: doc59=1.0,  doc704=1.0 hit=60: 
 doc60=1.0,  doc705=1.0 hit=61: doc61=1.0,  doc706=1.0 hit=62: doc62=1.0,  
 doc707=1.0 hit=63: doc63=1.0,  doc708=1.0 hit=64: doc64=1.0,  doc709=1.0 
 hit=65: doc65=1.0,  doc710=1.0 hit=66: doc66=1.0,  doc711=1.0 hit=67: 
 doc67=1.0,  doc712=1.0 hit=68: doc68=1.0,  doc713=1.0 hit=69: doc69=1.0,  
 doc714=1.0 hit=70: doc70=1.0,  doc715=1.0 hit=71: doc71=1.0,  doc716=1.0 
 hit=72: doc72=1.0,  doc717=1.0 hit=73: doc73=1.0,  doc718=1.0 hit=74: 
 doc74=1.0,  doc719=1.0 hit=75: doc75=1.0,  doc720=1.0 hit=76: doc76=1.0,  
 doc721=1.0 hit=77: doc77=1.0,  doc722=1.0 hit=78: doc78=1.0,  doc723=1.0 
 hit=79: doc79=1.0,  doc724=1.0 hit=80: doc80=1.0,  doc725=1.0 hit=81: 
 doc81=1.0,  doc726=1.0 hit=82: doc82=1.0,  doc727=1.0 hit=83: doc83=1.0,  
 doc728=1.0 hit=84: doc84=1.0,  doc729=1.0 hit=85: doc85=1.0,  doc730=1.0 
 hit=86: doc86=1.0,  doc731=1.0 hit=87: doc87=1.0,  doc732=1.0 hit=88: 
 doc88=1.0,  doc733=1.0 hit=89: doc89=1.0,  doc734=1.0 hit=90: doc90=1.0,  
 doc735=1.0 hit=91: doc91=1.0,  doc736=1.0 hit=92: doc92=1.0,  doc737=1.0 
 hit=93: doc93=1.0,  doc738=1.0 hit=94: doc94=1.0,  doc739=1.0 hit=95: 
 doc95=1.0,  doc740=1.0 hit=96: doc96=1.0,  doc741=1.0 hit=97: doc97=1.0,  
 doc742=1.0 hit=98: doc98=1.0,  doc743=1.0 hit=99: doc99=1.0,  doc744=1.0 
 hit=100: doc100=1.0,  doc745=1.0 hit=101: doc101=1.0,  doc746=1.0 hit=102: 
 doc102=1.0,  doc747=1.0 hit=103: doc103=1.0,  doc748=1.0 hit=104: 
 doc104=1.0,  doc749=1.0 hit=105: doc105=1.0,  doc750=1.0 hit=106: 
 doc106=1.0,  doc751=1.0 hit=107: doc107=1.0,  doc752=1.0 hit=108: 
 doc108=1.0,  doc753=1.0 hit=109: doc109=1.0,  doc754=1.0 hit=110: 
 doc110=1.0,  doc755=1.0 hit=111: doc111=1.0,  doc756=1.0 hit=112: 
 doc112=1.0,  doc757=1.0 hit=113: doc113=1.0,  doc758=1.0 hit=114: 
 doc114=1.0,  doc759=1.0 hit=115: doc115=1.0,  doc760=1.0 hit=116: 
 doc116=1.0,  doc761=1.0 hit=117: doc117=1.0,  doc762=1.0 hit=118: 
 doc118=1.0,  doc763=1.0 hit=119: doc119=1.0,  doc764=1.0 hit=120: 
 doc120=1.0,  doc765=1.0 hit=121: doc121=1.0,  doc766=1.0 hit=122: 
 doc122=1.0,  doc767=1.0 hit=123: doc123=1.0,  doc768=1.0 hit=124: 
 doc124=1.0,  doc769=1.0 hit=125: doc125=1.0,  doc770=1.0 hit=126: 
 doc126=1.0,  doc771=1.0 hit=127: doc127=1.0,  doc772=1.0 hit=128: 
 doc128=1.0,  doc773=1.0 hit=129: doc129=1.0,  doc774=1.0 hit=130: 
 doc130=1.0,  doc775=1.0 hit=131: doc131=1.0,  doc776=1.0 hit=132: 
 doc132=1.0,  doc777=1.0 hit=133: doc133=1.0,  doc778=1.0 hit=134: 
 doc134=1.0,  doc779=1.0 hit=135: doc135=1.0,  doc780=1.0 hit=136: 
 doc136=1.0,  doc781=1.0 hit=137: doc137=1.0,  doc782=1.0 hit=138: 
 doc138=1.0,  doc783=1.0 

[jira] [Updated] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6184:
-
Attachment: LUCENE-6184.patch

Updated patch with better documentation of the semantics of BulkScorer.score.

bq. Can this lead to a better minshouldmatch impl for booleanscorer?

I don't think it would work in the general case yet. This change is useful to 
skip over large numbers of non-matching documents, but it still calls nextDoc() 
all the time, not advance(), so I think BS2 is still a better option for now?

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch, LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
     scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 735 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/735/

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
file handle leaks: 
[SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler
 B77BCC13B9C41EA5-001/index-SimpleFSDirectory-043/replication.properties)]

Stack Trace:
java.lang.RuntimeException: file handle leaks: 
[SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler
 B77BCC13B9C41EA5-001/index-SimpleFSDirectory-043/replication.properties)]
at __randomizedtesting.SeedInfo.seed([B77BCC13B9C41EA5]:0)
at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:182)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception
at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:47)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:84)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:259)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:214)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:231)
at java.nio.file.Files.newByteChannel(Files.java:317)
at java.nio.file.Files.newByteChannel(Files.java:363)
at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:61)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2474)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:654)
at 
org.apache.solr.handler.ReplicationHandler.loadReplicationProperties(ReplicationHandler.java:832)
at 
org.apache.solr.handler.SnapPuller.logReplicationTimeAndConfFiles(SnapPuller.java:571)
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:510)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:343)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:224)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more


FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:19959/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:19959/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([B77BCC13B9C41EA5:369D420BCE9B7E99]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 

[jira] [Commented] (SOLR-6982) bin/solr and SolrCLI should support SSL-related Java System Properties

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279073#comment-14279073
 ] 

ASF subversion and git services commented on SOLR-6982:
---

Commit 1652213 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652213 ]

SOLR-6982: bin/solr and SolrCLI should support SSL-related Java System 
Properties

 bin/solr and SolrCLI should support SSL-related Java System Properties
 --

 Key: SOLR-6982
 URL: https://issues.apache.org/jira/browse/SOLR-6982
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6982.patch


 SolrCLI is used by bin/solr to create collections, run a healthcheck, and get 
 system info. If Solr is running in SSL mode, then these actions won't work 
 unless the proper SSL-related Java system properties are set.






[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279100#comment-14279100
 ] 

Adrien Grand commented on LUCENE-6185:
--

Doc ids are already rebased with the doc base of each reader context before the 
TopDocs.merge call so I think that would be fine? And then when we call 
TopDocs.merge, we provide the top docs instances in the same order as leaves 
have been provided to IndexSearcher.search so tie-breaking by shard id has the 
same effect as tie-breaking by doc id?
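To make that concrete, a rough sketch (not the actual IndexSearcher code; 
searchLeaf is a made-up helper and sort/numHits come from the surrounding search):
{code:java}
TopDocs[] perLeaf = new TopDocs[leaves.size()];
for (int i = 0; i < leaves.size(); i++) {
  LeafReaderContext ctx = leaves.get(i);
  TopDocs local = searchLeaf(ctx);          // hypothetical: top hits for this leaf, local doc ids
  for (ScoreDoc sd : local.scoreDocs) {
    sd.doc += ctx.docBase;                  // rebase to index-wide doc ids before merging
  }
  perLeaf[i] = local;                       // leaf order preserved, so shardIndex tie-break == doc order
}
TopDocs merged = TopDocs.merge(sort, numHits, perLeaf);
{code}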

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?






Re: Error building PyLucene with added classes

2015-01-15 Thread Daniel Duma
Update: never mind, I was placing the files in the wrong folder. Solved!

Thanks,
Daniel

On 15 January 2015 at 17:00, Daniel Duma danield...@gmail.com wrote:

 Hi all,

 I have added some classes that I need to Lucene and now I cannot build
 PyLucene 4.9.

 Everything runs fine inside Eclipse, but when copying the .java files to
 the corresponding folders inside the PyLucene source directory and
 rebuilding, I get this error:





 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
 error: cannot find symbol
   symbol: class QueryParser
 
 Here is the full output:

 ivy-configure:
 [ivy:configure] :: Apache Ivy 2.4.0-rc1 - 20140315220245 ::
 http://ant.apache.org/ivy/ ::
 [ivy:configure] :: loading settings :: file =
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\ivy-settings.xml

 resolve:

 init:

 -clover.disable:

 -clover.load:

 -clover.classpath:

 -clover.setup:

 clover:

 compile-core:
 [javac] Compiling 734 source files to
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\build\core\classes\java
 [javac]

 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:15:
 error: cannot find symbol
 [javac] MultiFieldQueryParser {
 [javac] ^
 [javac]   symbol: class MultiFieldQueryParser
 [javac]

 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
 error: cannot find symbol
 [javac] public class FieldAgnosticQueryParser extends QueryParser {
 [javac]   ^
 [javac]   symbol: class QueryParser
 [javac]

 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:23:
 error: method does not override or implement a method from a supertype
 [javac] @Override
 [javac] ^

 BUILD FAILED
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:694: The
 following error occurred while executing this line:
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:480: The
 following error occurred while executing this line:
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:1755:
 Compile failed; see the compiler error output for details.

 PyLucene builds just fine without the added files, and I have checked and
 the files it can't find are where they should be!

 Cheers,
 Daniel



[jira] [Commented] (LUCENE-5735) Faceting for DateRangePrefixTree

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279072#comment-14279072
 ] 

ASF subversion and git services commented on LUCENE-5735:
-

Commit 1652212 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1652212 ]

LUCENE-5735: spatial PrefixTreeFacetCounter (for faceting on SpatialPrefixTrees)

 Faceting for DateRangePrefixTree
 

 Key: LUCENE-5735
 URL: https://issues.apache.org/jira/browse/LUCENE-5735
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.x

 Attachments: LUCENE-5735.patch, LUCENE-5735.patch, 
 LUCENE-5735__PrefixTreeFacetCounter.patch


 The newly added DateRangePrefixTree (DRPT) encodes terms in a fashion 
 amenable to faceting by meaningful time buckets. The motivation for this 
 feature is to efficiently populate a calendar bar chart or 
 [heat-map|http://bl.ocks.org/mbostock/4063318]. It's not hard if you have 
 date instances, as many do, but it's challenging for date ranges.
 Internally this is going to iterate over the terms using seek/next with 
 TermsEnum as appropriate.  It should be quite efficient; it won't need any 
 special caches. I should be able to re-use SPT traversal code in 
 AbstractVisitingPrefixTreeFilter.  If this goes especially well, the 
 underlying implementation will be re-usable for geospatial heat-map faceting.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b20) - Build # 11449 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11449/
Java: 64bit/jdk1.8.0_40-ea-b20 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DistribCursorPagingTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([65EDB8B6D1E69EF9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 8927 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistribCursorPagingTest
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.DistribCursorPagingTest
 65EDB8B6D1E69EF9-001/init-core-data-001
   [junit4]   2 117931 T568 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2 117932 T568 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /
   [junit4]   2 117933 T568 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 117934 T568 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 117934 T569 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 118034 T568 oasc.ZkTestServer.run start zk server on port:56067
   [junit4]   2 118035 T568 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 118035 T568 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 118039 T576 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@19a6b2d9 
name:ZooKeeperConnection Watcher:127.0.0.1:56067 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 118039 T568 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2 118040 T568 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 118040 T568 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 118043 T568 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 118044 T568 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 118045 T579 

[jira] [Comment Edited] (SOLR-6900) bin/post improvements needed

2015-01-15 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14277938#comment-14277938
 ] 

Erik Hatcher edited comment on SOLR-6900 at 1/15/15 2:39 PM:
-

Latest improvements:

  * Error handling: script now checks several things like collection specified, 
files/directories not mixed with URLs, and that one or more are specified
  * Spaces in file names now handled properly
  * Script works when run from any working directory

Open issues:
  * Windows version not implemented yet (volunteers to get this in for 5.0?  
Otherwise will be deferred to a later version)
  * args (direct string to post to Solr) and stdin not yet supported



was (Author: ehatcher):
Latest improvements:

  * Error handling: script now checks several things like collection specified, 
files/directories not mixed with URLs, and that one more are specified
  * Spaces in file names now handled properly
  * Script works when run from any working directory

Open issues:
  * Windows version not implemented yet (volunteers to get this in for 5.0?  
Otherwise will be deferred to a later version)
  * args (direct string to post to Solr) and stdin not yet supported


 bin/post improvements needed
 

 Key: SOLR-6900
 URL: https://issues.apache.org/jira/browse/SOLR-6900
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Blocker
 Fix For: 5.0, Trunk


 * Fix glob patterns.  They don't work as expected: bin/post collection1 
 \*.xml expands \*.xml such that the script gets all the file names as 
 parameters not just literally \*.xml
 * Add error handling to check that the collection exists
 * Create Windows version



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6984) Solr commitwithin is not happening for deletebyId

2015-01-15 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278763#comment-14278763
 ] 

Ishan Chattopadhyaya commented on SOLR-6984:


Hi Sriram, I observed this problem and contributed a patch as part of SOLR-5890 
that deals with this and also fixes other issues relating to deleteById.


 Solr commitwithin is not happening for deletebyId
 -

 Key: SOLR-6984
 URL: https://issues.apache.org/jira/browse/SOLR-6984
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.6, Trunk
Reporter: sriram vaithianathan
 Fix For: 4.10.4, 5.0, Trunk

 Attachments: 4_10_3-SOLR-6984.patch, trunk-SOLR-6984.patch


 Hi All,
 Just found that SolrJ does not use commitWithin when using deleteById. This 
 issue is discussed in 
 http://grokbase.com/t/lucene/solr-user/1275gkpntd/deletebyid-commitwithin-question
 Faced the same issue today and found that, in 
 org.apache.solr.client.solrj.request.UpdateRequest, when the new UpdateRequest is 
 created in the getRoutes() method (line number 244), the setCommitWithin param 
 is not set on the urequest variable as it is done a few lines above (line 
 number 204). This causes setCommitWithin to revert to the default value of -1, and 
 the commit does not happen. Adding 
 urequest.setCommitWithin(getCommitWithin()) there fixes it, and the feature works from 
 SolrJ.
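
A minimal sketch of the fix described above (illustrative only; everything except setCommitWithin() and deleteById() is a hypothetical stand-in, not the actual getRoutes() body):

{code:java}
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

// Hypothetical helper: each per-route request must carry the commitWithin of the
// original request, otherwise it silently falls back to the default of -1.
class CommitWithinFixSketch {
  static UpdateRequest perRouteRequest(ModifiableSolrParams params, String deleteId, int commitWithin) {
    UpdateRequest urequest = new UpdateRequest();
    urequest.setParams(params);               // params already prepared for this route (assumption)
    urequest.setCommitWithin(commitWithin);   // the call this issue says was missing
    urequest.deleteById(deleteId);            // route the delete through this request
    return urequest;
  }
}
{code}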



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6184:
-
Attachment: LUCENE-6184.patch

Here is a patch:
 - BulkScorer now returns a hint on the next matching doc after {{max}}
 - BooleanScorer uses this information in order to only score windows of 
documents where at least one clause matches (by putting the bulk scorers into a 
priority queue)

This helps boolean queries with dense clauses since it allowed removing the 
{{hasMatches}} optimization, which avoided iterating over the bit set when there 
are no matches but had the drawback of making OrCollector.collect
heavier.

And this helps boolean queries with very sparse clauses since they now only 
collect windows where they have matches.
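
A minimal, self-contained sketch of the windowing idea (WindowScorer stands in for a per-clause BulkScorer, and all names here are illustrative assumptions rather than the actual patch):

{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: only score 2048-doc windows in which at least one clause can match,
// using a priority queue ordered by each clause's next candidate doc.
class SparseWindowingSketch {
  static final int WINDOW = 2048;

  interface WindowScorer {
    // Collects matches in [min, max) and returns the next candidate doc >= max,
    // or Integer.MAX_VALUE when the clause is exhausted.
    int scoreWindow(int min, int max);
  }

  static void scoreAll(List<WindowScorer> clauses, int maxDoc) {
    PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
    for (int i = 0; i < clauses.size(); i++) {
      pq.add(new int[] { i, 0 });                       // {clause index, next candidate doc}
    }
    while (!pq.isEmpty() && pq.peek()[1] < maxDoc) {
      int windowMin = (pq.peek()[1] / WINDOW) * WINDOW; // jump straight to the next window with a match
      int windowMax = Math.min(windowMin + WINDOW, maxDoc);
      while (!pq.isEmpty() && pq.peek()[1] < windowMax) {
        int[] top = pq.poll();                          // only clauses matching in this window score it
        top[1] = clauses.get(top[0]).scoreWindow(windowMin, windowMax);
        if (top[1] < Integer.MAX_VALUE) {
          pq.add(top);
        }
      }
      // a real scorer would now replay/flush the matches accumulated for this window
    }
  }
}
{code}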

Here is the result of the luceneutil benchmark on the 10M wikipedia corpus. I 
added some tasks to test sparse clauses: VeryLow is for term queries that have 
a doc freq between 400 and 500, and VeryLowVeryLow is a disjunction of 2 such 
terms:

{code}
Task                QPS baseline   StdDev    QPS patch   StdDev               Pct diff
HighSloppyPhrase           32.70   (4.3%)        32.39   (4.0%)    -1.0% (  -8% -    7%)
Prefix3                   162.73   (5.8%)       161.32   (6.6%)    -0.9% ( -12% -   12%)
LowTerm                   803.22   (6.2%)       797.47   (6.2%)    -0.7% ( -12% -   12%)
IntNRQ                     13.84   (6.9%)        13.75   (7.3%)    -0.7% ( -13% -   14%)
OrHighNotLow               60.36   (2.7%)        59.96   (3.9%)    -0.7% (  -7% -    6%)
LowSloppyPhrase            17.94   (3.0%)        17.82   (2.8%)    -0.7% (  -6% -    5%)
VeryLow                  6095.14   (5.8%)      6057.73   (5.0%)    -0.6% ( -10% -   10%)
LowPhrase                 276.59   (2.2%)       274.97   (1.6%)    -0.6% (  -4% -    3%)
OrHighNotMed               43.56   (2.6%)        43.32   (3.3%)    -0.6% (  -6% -    5%)
OrNotHighLow              924.37   (2.5%)       919.21   (2.4%)    -0.6% (  -5% -    4%)
AndHighLow                703.38   (2.9%)       699.62   (3.6%)    -0.5% (  -6% -    6%)
Wildcard                   93.74   (3.1%)        93.29   (3.0%)    -0.5% (  -6% -    5%)
MedSloppyPhrase            79.24   (2.8%)        78.91   (2.3%)    -0.4% (  -5% -    4%)
OrNotHighMed              207.14   (2.0%)       206.31   (2.2%)    -0.4% (  -4% -    3%)
HighSpanNear               12.56   (0.9%)        12.53   (1.1%)    -0.2% (  -2% -    1%)
HighPhrase                 13.58   (2.3%)        13.55   (2.1%)    -0.2% (  -4% -    4%)
OrHighNotHigh              33.29   (1.6%)        33.24   (2.0%)    -0.2% (  -3% -    3%)
OrNotHighHigh              56.10   (1.6%)        56.00   (1.8%)    -0.2% (  -3% -    3%)
HighTerm                   91.52   (2.6%)        91.37   (2.7%)    -0.2% (  -5% -    5%)
Respell                    71.63   (5.5%)        71.52   (5.3%)    -0.1% ( -10% -   11%)
LowSpanNear                18.17   (1.0%)        18.16   (0.8%)    -0.1% (  -1% -    1%)
MedTerm                   146.69   (2.5%)       146.56   (3.0%)    -0.1% (  -5% -    5%)
AndHighMed                274.22   (2.6%)       274.00   (2.3%)    -0.1% (  -4% -    4%)
MedSpanNear                31.01   (0.9%)        31.00   (1.1%)    -0.0% (  -1% -    1%)
AndHighHigh                77.34   (1.8%)        77.32   (1.7%)    -0.0% (  -3% -    3%)
MedPhrase                  19.10   (6.2%)        19.10   (6.2%)     0.0% ( -11% -   13%)
Fuzzy2                     26.84   (6.8%)        26.88   (7.6%)     0.1% ( -13% -   15%)
PKLookup                  272.91   (3.1%)       274.16   (2.7%)     0.5% (  -5% -    6%)
OrHighMed                  59.25  (11.8%)        62.90   (6.5%)     6.2% ( -10% -   27%)
OrHighLow                  64.54  (11.9%)        68.73   (6.5%)     6.5% ( -10% -   28%)
OrHighHigh                 42.89  (12.2%)        45.77   (6.9%)     6.7% ( -11% -   29%)
Fuzzy1                     95.20   (4.2%)       101.65   (5.9%)     6.8% (  -3% -   17%)
VeryLowVeryLow           1936.31   (3.2%)      2263.44   (3.3%)    16.9% (  10% -   24%)
{code}

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for 

[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278814#comment-14278814
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1652137 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652137 ]

SOLR-6770 mask the useParams after expanding it

 Add/edit param sets and use them in Requests
 

 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch


 Make it possible to define paramsets and use them directly in requests
 example
 {code}
 curl http://localhost:8983/solr/collection1/config/params -H 
 'Content-type:application/json'  -d '{
 set : {x: {
   a:A val,
   b: B val}
},
 set : {y: {
x:X val,
Y: Y val}
},
 update : {y: {
x:X val modified}
},
 delete : z
 }'
 #do a GET to view all the configured params
 curl http://localhost:8983/solr/collection1/config/params
 #or  GET with a specific name to get only one set of params
 curl http://localhost:8983/solr/collection1/config/params/x
 {code}
 This data will be stored in conf/params.json
 This is used requesttime and adding/editing params will not result in core 
 reload and it will have no impact on the performance 
 example usage http://localhost/solr/collection/select?useParams=x,y
 or it can be directly configured with a request handler as follows
 {code}
 requestHandler name=/dump1 class=DumpRequestHandler useParams=x/
 {code}
  {{useParams}} specified in request overrides the one specified in 
 {{requestHandler}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11448 - Failure!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11448/
Java: 32bit/jdk1.8.0_40-ea-b20 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.expressions.TestExpressionSorts.testQueries

Error Message:
Hit 24 docnumbers don't match Hits length1=231 length2=231 hit=0: doc0=1.0,  
doc0=1.0 hit=1: doc1=1.0,  doc1=1.0 hit=2: doc2=1.0,  doc2=1.0 hit=3: doc3=1.0, 
 doc3=1.0 hit=4: doc4=1.0,  doc4=1.0 hit=5: doc5=1.0,  doc5=1.0 hit=6: 
doc6=1.0,  doc6=1.0 hit=7: doc7=1.0,  doc7=1.0 hit=8: doc8=1.0,  doc8=1.0 
hit=9: doc9=1.0,  doc9=1.0 hit=10: doc10=1.0,  doc10=1.0 hit=11: doc11=1.0,  
doc11=1.0 hit=12: doc12=1.0,  doc12=1.0 hit=13: doc13=1.0,  doc13=1.0 hit=14: 
doc14=1.0,  doc14=1.0 hit=15: doc15=1.0,  doc15=1.0 hit=16: doc16=1.0,  
doc16=1.0 hit=17: doc17=1.0,  doc17=1.0 hit=18: doc18=1.0,  doc18=1.0 hit=19: 
doc19=1.0,  doc19=1.0 hit=20: doc20=1.0,  doc20=1.0 hit=21: doc21=1.0,  
doc21=1.0 hit=22: doc22=1.0,  doc22=1.0 hit=23: doc23=1.0,  doc23=1.0 hit=24: 
doc24=1.0,  doc669=1.0 hit=25: doc25=1.0,  doc670=1.0 hit=26: doc26=1.0,  
doc671=1.0 hit=27: doc27=1.0,  doc672=1.0 hit=28: doc28=1.0,  doc673=1.0 
hit=29: doc29=1.0,  doc674=1.0 hit=30: doc30=1.0,  doc675=1.0 hit=31: 
doc31=1.0,  doc676=1.0 hit=32: doc32=1.0,  doc677=1.0 hit=33: doc33=1.0,  
doc678=1.0 hit=34: doc34=1.0,  doc679=1.0 hit=35: doc35=1.0,  doc680=1.0 
hit=36: doc36=1.0,  doc681=1.0 hit=37: doc37=1.0,  doc682=1.0 hit=38: 
doc38=1.0,  doc683=1.0 hit=39: doc39=1.0,  doc684=1.0 hit=40: doc40=1.0,  
doc685=1.0 hit=41: doc41=1.0,  doc686=1.0 hit=42: doc42=1.0,  doc687=1.0 
hit=43: doc43=1.0,  doc688=1.0 hit=44: doc44=1.0,  doc689=1.0 hit=45: 
doc45=1.0,  doc690=1.0 hit=46: doc46=1.0,  doc691=1.0 hit=47: doc47=1.0,  
doc692=1.0 hit=48: doc48=1.0,  doc693=1.0 hit=49: doc49=1.0,  doc694=1.0 
hit=50: doc50=1.0,  doc695=1.0 hit=51: doc51=1.0,  doc696=1.0 hit=52: 
doc52=1.0,  doc697=1.0 hit=53: doc53=1.0,  doc698=1.0 hit=54: doc54=1.0,  
doc699=1.0 hit=55: doc55=1.0,  doc700=1.0 hit=56: doc56=1.0,  doc701=1.0 
hit=57: doc57=1.0,  doc702=1.0 hit=58: doc58=1.0,  doc703=1.0 hit=59: 
doc59=1.0,  doc704=1.0 hit=60: doc60=1.0,  doc705=1.0 hit=61: doc61=1.0,  
doc706=1.0 hit=62: doc62=1.0,  doc707=1.0 hit=63: doc63=1.0,  doc708=1.0 
hit=64: doc64=1.0,  doc709=1.0 hit=65: doc65=1.0,  doc710=1.0 hit=66: 
doc66=1.0,  doc711=1.0 hit=67: doc67=1.0,  doc712=1.0 hit=68: doc68=1.0,  
doc713=1.0 hit=69: doc69=1.0,  doc714=1.0 hit=70: doc70=1.0,  doc715=1.0 
hit=71: doc71=1.0,  doc716=1.0 hit=72: doc72=1.0,  doc717=1.0 hit=73: 
doc73=1.0,  doc718=1.0 hit=74: doc74=1.0,  doc719=1.0 hit=75: doc75=1.0,  
doc720=1.0 hit=76: doc76=1.0,  doc721=1.0 hit=77: doc77=1.0,  doc722=1.0 
hit=78: doc78=1.0,  doc723=1.0 hit=79: doc79=1.0,  doc724=1.0 hit=80: 
doc80=1.0,  doc725=1.0 hit=81: doc81=1.0,  doc726=1.0 hit=82: doc82=1.0,  
doc727=1.0 hit=83: doc83=1.0,  doc728=1.0 hit=84: doc84=1.0,  doc729=1.0 
hit=85: doc85=1.0,  doc730=1.0 hit=86: doc86=1.0,  doc731=1.0 hit=87: 
doc87=1.0,  doc732=1.0 hit=88: doc88=1.0,  doc733=1.0 hit=89: doc89=1.0,  
doc734=1.0 hit=90: doc90=1.0,  doc735=1.0 hit=91: doc91=1.0,  doc736=1.0 
hit=92: doc92=1.0,  doc737=1.0 hit=93: doc93=1.0,  doc738=1.0 hit=94: 
doc94=1.0,  doc739=1.0 hit=95: doc95=1.0,  doc740=1.0 hit=96: doc96=1.0,  
doc741=1.0 hit=97: doc97=1.0,  doc742=1.0 hit=98: doc98=1.0,  doc743=1.0 
hit=99: doc99=1.0,  doc744=1.0 hit=100: doc100=1.0,  doc745=1.0 hit=101: 
doc101=1.0,  doc746=1.0 hit=102: doc102=1.0,  doc747=1.0 hit=103: doc103=1.0,  
doc748=1.0 hit=104: doc104=1.0,  doc749=1.0 hit=105: doc105=1.0,  doc750=1.0 
hit=106: doc106=1.0,  doc751=1.0 hit=107: doc107=1.0,  doc752=1.0 hit=108: 
doc108=1.0,  doc753=1.0 hit=109: doc109=1.0,  doc754=1.0 hit=110: doc110=1.0,  
doc755=1.0 hit=111: doc111=1.0,  doc756=1.0 hit=112: doc112=1.0,  doc757=1.0 
hit=113: doc113=1.0,  doc758=1.0 hit=114: doc114=1.0,  doc759=1.0 hit=115: 
doc115=1.0,  doc760=1.0 hit=116: doc116=1.0,  doc761=1.0 hit=117: doc117=1.0,  
doc762=1.0 hit=118: doc118=1.0,  doc763=1.0 hit=119: doc119=1.0,  doc764=1.0 
hit=120: doc120=1.0,  doc765=1.0 hit=121: doc121=1.0,  doc766=1.0 hit=122: 
doc122=1.0,  doc767=1.0 hit=123: doc123=1.0,  doc768=1.0 hit=124: doc124=1.0,  
doc769=1.0 hit=125: doc125=1.0,  doc770=1.0 hit=126: doc126=1.0,  doc771=1.0 
hit=127: doc127=1.0,  doc772=1.0 hit=128: doc128=1.0,  doc773=1.0 hit=129: 
doc129=1.0,  doc774=1.0 hit=130: doc130=1.0,  doc775=1.0 hit=131: doc131=1.0,  
doc776=1.0 hit=132: doc132=1.0,  doc777=1.0 hit=133: doc133=1.0,  doc778=1.0 
hit=134: doc134=1.0,  doc779=1.0 hit=135: doc135=1.0,  doc780=1.0 hit=136: 
doc136=1.0,  doc781=1.0 hit=137: doc137=1.0,  doc782=1.0 hit=138: doc138=1.0,  
doc783=1.0 hit=139: doc139=1.0,  doc784=1.0 hit=140: doc140=1.0,  doc785=1.0 
hit=141: doc141=1.0,  doc786=1.0 hit=142: doc142=1.0,  doc787=1.0 hit=143: 
doc143=1.0,  doc788=1.0 hit=144: doc144=1.0,  doc789=1.0 hit=145: doc145=1.0,  
doc790=1.0 hit=146: doc146=1.0,  doc791=1.0 hit=147: doc147=1.0,  doc792=1.0 
hit=148: 

[jira] [Resolved] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6941.
---
Resolution: Fixed

 DistributedQueue#containsTaskWithRequestId can fail with NPE.
 -

 Key: SOLR-6941
 URL: https://issues.apache.org/jira/browse/SOLR-6941
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, Trunk

 Attachments: SOLR-6941.patch


 I've seen this happen a few times recently. It seems data can be returned as null, and we 
 need to guard against it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6931.
---
Resolution: Fixed

 We should do a limited retry when using HttpClient.
 ---

 Key: SOLR-6931
 URL: https://issues.apache.org/jira/browse/SOLR-6931
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6931.patch, SOLR-6931.patch, SOLR-6931.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6943) HdfsDirectoryFactory should fall back to system props for most of it's config if it is not found in solrconfig.xml.

2015-01-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6943.
---
Resolution: Fixed

 HdfsDirectoryFactory should fall back to system props for most of it's config 
 if it is not found in solrconfig.xml.
 ---

 Key: SOLR-6943
 URL: https://issues.apache.org/jira/browse/SOLR-6943
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6943.patch, SOLR-6943.patch


 The new server and config sets have undone the work I did to make HDFS easy 
 out of the box. Rather than count on config for that, we should just allow 
 most of this config to be specified at the sys property level. This improves 
 the global cache config situation as well.
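
A minimal, self-contained sketch of the fallback idea (the names and exact lookup order are assumptions, not the actual HdfsDirectoryFactory code):

{code:java}
import java.util.Map;

// Hypothetical helper: prefer a value from solrconfig.xml, otherwise fall back
// to a JVM system property of the same name (e.g. -Dsolr.hdfs.home=...).
class ConfigWithSysPropFallback {
  private final Map<String, String> solrconfigArgs;

  ConfigWithSysPropFallback(Map<String, String> solrconfigArgs) {
    this.solrconfigArgs = solrconfigArgs;
  }

  String get(String name, String def) {
    String val = solrconfigArgs.get(name);
    if (val != null) {
      return val;                           // explicit config always wins
    }
    return System.getProperty(name, def);   // otherwise the sys property (or the default)
  }
}
{code}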



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278794#comment-14278794
 ] 

Robert Muir commented on LUCENE-6184:
-

The trickiest part here is the new semantics of the return value. 
FilteredQuery has a code comment that maybe should be moved to BulkScorer's 
docs to help elaborate (there is a small typo in that comment too).

Can this lead to a better minShouldMatch impl for BooleanScorer? I'm just as 
happy with it removed too, but I know a while ago we benchmarked that BS1 can 
still be faster for that query, so it's just a possibility. Maybe it should just 
stay as a pure disjunction scorer.

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
 scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6880) ZKStateReader makes a call to updateWatchedCollection, which doesn't accept null with a method creating the argument that can return null.

2015-01-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6880.
---
Resolution: Fixed

 ZKStateReader makes a call to updateWatchedCollection, which doesn't accept 
 null with a method creating the argument that can return null.
 --

 Key: SOLR-6880
 URL: https://issues.apache.org/jira/browse/SOLR-6880
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6880.patch, SOLR-6880.patch


 I've seen the resulting NPE in tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6181) Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278854#comment-14278854
 ] 

ASF subversion and git services commented on LUCENE-6181:
-

Commit 1652147 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1652147 ]

LUCENE-6181: spatial move pointsOnly to superclass and add some getters too.

 Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy
 ---

 Key: LUCENE-6181
 URL: https://issues.apache.org/jira/browse/LUCENE-6181
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Attachments: LUCENE-6181.patch


 The 'points only' hint should be at PrefixTreeStrategy, not at RPT.  The Term 
 strategy subclass may not use it (yet), but having it there conveys intent and prevents a 
 needless cast in faceting on PrefixTreeStrategy generally (a separate issue).
 The attached patch also adds some getters for good measure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6182) Spatial VisitorTemplate.visitScanned needn't be abstract

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278872#comment-14278872
 ] 

ASF subversion and git services commented on LUCENE-6182:
-

Commit 1652156 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652156 ]

LUCENE-6182: spatial refactor VisitorTemplate.visitScanned needn't be abstract.
And have collectDocs specify BitSet not FixedBitSet.  (these are internal APIs)

 Spatial VisitorTemplate.visitScanned needn't be abstract
 

 Key: LUCENE-6182
 URL: https://issues.apache.org/jira/browse/LUCENE-6182
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 5.x

 Attachments: LUCENE-6182.patch


 visitScanned can be implemented, allowing subclasses to specialize if desired.
 {code:java}
 protected void visitScanned(Cell cell) throws IOException {
   if (queryShape.relate(cell.getShape()).intersects()) {
     if (cell.isLeaf()) {
       visitLeaf(cell);
     } else {
       visit(cell);
     }
   }
 }
 {code}
 Then I can remove Intersect's impl, and remove the one prefix-tree faceting.
 Additionally, I noticed collectDocs(FixedBitSet) can be improved to take BitSet 
 and call bitSet.or(docsEnum).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11448 - Failure!

2015-01-15 Thread Adrien Grand
I'm looking into it.

On Thu, Jan 15, 2015 at 5:15 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11448/
 Java: 32bit/jdk1.8.0_40-ea-b20 -client -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  org.apache.lucene.expressions.TestExpressionSorts.testQueries

 Error Message:
 Hit 24 docnumbers don't match Hits length1=231 length2=231 hit=0: doc0=1.0,  
 doc0=1.0 hit=1: doc1=1.0,  doc1=1.0 hit=2: doc2=1.0,  doc2=1.0 hit=3: 
 doc3=1.0,  doc3=1.0 hit=4: doc4=1.0,  doc4=1.0 hit=5: doc5=1.0,  doc5=1.0 
 hit=6: doc6=1.0,  doc6=1.0 hit=7: doc7=1.0,  doc7=1.0 hit=8: doc8=1.0,  
 doc8=1.0 hit=9: doc9=1.0,  doc9=1.0 hit=10: doc10=1.0,  doc10=1.0 hit=11: 
 doc11=1.0,  doc11=1.0 hit=12: doc12=1.0,  doc12=1.0 hit=13: doc13=1.0,  
 doc13=1.0 hit=14: doc14=1.0,  doc14=1.0 hit=15: doc15=1.0,  doc15=1.0 hit=16: 
 doc16=1.0,  doc16=1.0 hit=17: doc17=1.0,  doc17=1.0 hit=18: doc18=1.0,  
 doc18=1.0 hit=19: doc19=1.0,  doc19=1.0 hit=20: doc20=1.0,  doc20=1.0 hit=21: 
 doc21=1.0,  doc21=1.0 hit=22: doc22=1.0,  doc22=1.0 hit=23: doc23=1.0,  
 doc23=1.0 hit=24: doc24=1.0,  doc669=1.0 hit=25: doc25=1.0,  doc670=1.0 
 hit=26: doc26=1.0,  doc671=1.0 hit=27: doc27=1.0,  doc672=1.0 hit=28: 
 doc28=1.0,  doc673=1.0 hit=29: doc29=1.0,  doc674=1.0 hit=30: doc30=1.0,  
 doc675=1.0 hit=31: doc31=1.0,  doc676=1.0 hit=32: doc32=1.0,  doc677=1.0 
 hit=33: doc33=1.0,  doc678=1.0 hit=34: doc34=1.0,  doc679=1.0 hit=35: 
 doc35=1.0,  doc680=1.0 hit=36: doc36=1.0,  doc681=1.0 hit=37: doc37=1.0,  
 doc682=1.0 hit=38: doc38=1.0,  doc683=1.0 hit=39: doc39=1.0,  doc684=1.0 
 hit=40: doc40=1.0,  doc685=1.0 hit=41: doc41=1.0,  doc686=1.0 hit=42: 
 doc42=1.0,  doc687=1.0 hit=43: doc43=1.0,  doc688=1.0 hit=44: doc44=1.0,  
 doc689=1.0 hit=45: doc45=1.0,  doc690=1.0 hit=46: doc46=1.0,  doc691=1.0 
 hit=47: doc47=1.0,  doc692=1.0 hit=48: doc48=1.0,  doc693=1.0 hit=49: 
 doc49=1.0,  doc694=1.0 hit=50: doc50=1.0,  doc695=1.0 hit=51: doc51=1.0,  
 doc696=1.0 hit=52: doc52=1.0,  doc697=1.0 hit=53: doc53=1.0,  doc698=1.0 
 hit=54: doc54=1.0,  doc699=1.0 hit=55: doc55=1.0,  doc700=1.0 hit=56: 
 doc56=1.0,  doc701=1.0 hit=57: doc57=1.0,  doc702=1.0 hit=58: doc58=1.0,  
 doc703=1.0 hit=59: doc59=1.0,  doc704=1.0 hit=60: doc60=1.0,  doc705=1.0 
 hit=61: doc61=1.0,  doc706=1.0 hit=62: doc62=1.0,  doc707=1.0 hit=63: 
 doc63=1.0,  doc708=1.0 hit=64: doc64=1.0,  doc709=1.0 hit=65: doc65=1.0,  
 doc710=1.0 hit=66: doc66=1.0,  doc711=1.0 hit=67: doc67=1.0,  doc712=1.0 
 hit=68: doc68=1.0,  doc713=1.0 hit=69: doc69=1.0,  doc714=1.0 hit=70: 
 doc70=1.0,  doc715=1.0 hit=71: doc71=1.0,  doc716=1.0 hit=72: doc72=1.0,  
 doc717=1.0 hit=73: doc73=1.0,  doc718=1.0 hit=74: doc74=1.0,  doc719=1.0 
 hit=75: doc75=1.0,  doc720=1.0 hit=76: doc76=1.0,  doc721=1.0 hit=77: 
 doc77=1.0,  doc722=1.0 hit=78: doc78=1.0,  doc723=1.0 hit=79: doc79=1.0,  
 doc724=1.0 hit=80: doc80=1.0,  doc725=1.0 hit=81: doc81=1.0,  doc726=1.0 
 hit=82: doc82=1.0,  doc727=1.0 hit=83: doc83=1.0,  doc728=1.0 hit=84: 
 doc84=1.0,  doc729=1.0 hit=85: doc85=1.0,  doc730=1.0 hit=86: doc86=1.0,  
 doc731=1.0 hit=87: doc87=1.0,  doc732=1.0 hit=88: doc88=1.0,  doc733=1.0 
 hit=89: doc89=1.0,  doc734=1.0 hit=90: doc90=1.0,  doc735=1.0 hit=91: 
 doc91=1.0,  doc736=1.0 hit=92: doc92=1.0,  doc737=1.0 hit=93: doc93=1.0,  
 doc738=1.0 hit=94: doc94=1.0,  doc739=1.0 hit=95: doc95=1.0,  doc740=1.0 
 hit=96: doc96=1.0,  doc741=1.0 hit=97: doc97=1.0,  doc742=1.0 hit=98: 
 doc98=1.0,  doc743=1.0 hit=99: doc99=1.0,  doc744=1.0 hit=100: doc100=1.0,  
 doc745=1.0 hit=101: doc101=1.0,  doc746=1.0 hit=102: doc102=1.0,  doc747=1.0 
 hit=103: doc103=1.0,  doc748=1.0 hit=104: doc104=1.0,  doc749=1.0 hit=105: 
 doc105=1.0,  doc750=1.0 hit=106: doc106=1.0,  doc751=1.0 hit=107: doc107=1.0, 
  doc752=1.0 hit=108: doc108=1.0,  doc753=1.0 hit=109: doc109=1.0,  doc754=1.0 
 hit=110: doc110=1.0,  doc755=1.0 hit=111: doc111=1.0,  doc756=1.0 hit=112: 
 doc112=1.0,  doc757=1.0 hit=113: doc113=1.0,  doc758=1.0 hit=114: doc114=1.0, 
  doc759=1.0 hit=115: doc115=1.0,  doc760=1.0 hit=116: doc116=1.0,  doc761=1.0 
 hit=117: doc117=1.0,  doc762=1.0 hit=118: doc118=1.0,  doc763=1.0 hit=119: 
 doc119=1.0,  doc764=1.0 hit=120: doc120=1.0,  doc765=1.0 hit=121: doc121=1.0, 
  doc766=1.0 hit=122: doc122=1.0,  doc767=1.0 hit=123: doc123=1.0,  doc768=1.0 
 hit=124: doc124=1.0,  doc769=1.0 hit=125: doc125=1.0,  doc770=1.0 hit=126: 
 doc126=1.0,  doc771=1.0 hit=127: doc127=1.0,  doc772=1.0 hit=128: doc128=1.0, 
  doc773=1.0 hit=129: doc129=1.0,  doc774=1.0 hit=130: doc130=1.0,  doc775=1.0 
 hit=131: doc131=1.0,  doc776=1.0 hit=132: doc132=1.0,  doc777=1.0 hit=133: 
 doc133=1.0,  doc778=1.0 hit=134: doc134=1.0,  doc779=1.0 hit=135: doc135=1.0, 
  doc780=1.0 hit=136: doc136=1.0,  doc781=1.0 hit=137: doc137=1.0,  doc782=1.0 
 hit=138: doc138=1.0,  doc783=1.0 hit=139: doc139=1.0,  doc784=1.0 hit=140: 
 doc140=1.0,  doc785=1.0 hit=141: doc141=1.0,  doc786=1.0 hit=142: doc142=1.0, 
  doc787=1.0 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2486 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2486/

6 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([C686D7E7AF3D8754]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([C686D7E7AF3D8754]:0)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([C686D7E7AF3D8754:476059FFD862E768]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (LUCENE-6182) Spatial VisitorTemplate.visitScanned needn't be abstract

2015-01-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-6182.
--
   Resolution: Fixed
Fix Version/s: (was: 5.x)
   5.1

 Spatial VisitorTemplate.visitScanned needn't be abstract
 

 Key: LUCENE-6182
 URL: https://issues.apache.org/jira/browse/LUCENE-6182
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 5.1

 Attachments: LUCENE-6182.patch


 visitScanned can be implemented, allowing subclasses to specialize if desired.
 {code:java}
 protected void visitScanned(Cell cell) throws IOException {
   if (queryShape.relate(cell.getShape()).intersects()) {
     if (cell.isLeaf()) {
       visitLeaf(cell);
     } else {
       visit(cell);
     }
   }
 }
 {code}
 Then I can remove Intersect's impl, and remove the one prefix-tree faceting.
 Additionally, I noticed collectDocs(FixedBitSet) can be improved to take BitSet 
 and call bitSet.or(docsEnum).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6900) bin/post improvements needed

2015-01-15 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278921#comment-14278921
 ] 

Alexandre Rafalovitch commented on SOLR-6900:
-

+1 for Delete. Can't do examples end to end without deleting. 

Also, does the tool support stand-alone commit commands (without files)? 

 bin/post improvements needed
 

 Key: SOLR-6900
 URL: https://issues.apache.org/jira/browse/SOLR-6900
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Blocker
 Fix For: 5.0, Trunk


 * Fix glob patterns.  They don't work as expected: bin/post collection1 
 \*.xml expands \*.xml such that the script gets all the file names as 
 parameters not just literally \*.xml
 * Add error handling to check that the collection exists
 * Create Windows version



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6184:


 Summary: BooleanScorer should better deal with sparse clauses
 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1


The way that BooleanScorer works looks like this:
{code}
for each (window of 2048 docs) {
  for each (optional scorer) {
scorer.score(window)
  }
}
{code}

This is not efficient for very sparse clauses (doc freq much lower than 
maxDoc/2048) since we keep on scoring windows of documents that do not match 
anything. BooleanScorer2 currently performs better in those cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6181) Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278858#comment-14278858
 ] 

ASF subversion and git services commented on LUCENE-6181:
-

Commit 1652149 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652149 ]

LUCENE-6181: spatial move pointsOnly to superclass and add some getters too.

 Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy
 ---

 Key: LUCENE-6181
 URL: https://issues.apache.org/jira/browse/LUCENE-6181
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Attachments: LUCENE-6181.patch


 The 'points only' hint should be at PrefixTreeStrategy, not at RPT.  The Term 
 strategy subclass may not use it (yet), but having it there conveys intent and prevents a 
 needless cast in faceting on PrefixTreeStrategy generally (a separate issue).
 The attached patch also adds some getters for good measure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6181) Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy

2015-01-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-6181.
--
   Resolution: Fixed
Fix Version/s: 5.1

 Move spatial pointsOnly from RPT to superclass - PrefixTreeStrategy
 ---

 Key: LUCENE-6181
 URL: https://issues.apache.org/jira/browse/LUCENE-6181
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 5.1

 Attachments: LUCENE-6181.patch


 The 'points only' hint should be at PrefixTreeStrategy, not at RPT.  The Term 
 strategy subclass may not use it (yet), but having it there conveys intent and prevents a 
 needless cast in faceting on PrefixTreeStrategy generally (a separate issue).
 The attached patch also adds some getters for good measure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278893#comment-14278893
 ] 

Mark Miller commented on SOLR-6367:
---

I'll try again from fresh.

bq. The file has nothing written to it before the crash.

I inspected the file before kill -9, and while it will be reported as 0 size, I 
could open and view the doc in the tlog file.

 empty tlog on HDFS when hard crash - no docs to replay on recovery
 --

 Key: SOLR-6367
 URL: https://issues.apache.org/jira/browse/SOLR-6367
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
 Fix For: 5.0, Trunk


 Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
 Jul 2014)...
 {panel}
 Reproduce steps:
 1) Setup Solr to run on HDFS like this:
 {noformat}
 java -Dsolr.directoryFactory=HdfsDirectoryFactory
  -Dsolr.lock.type=hdfs
  -Dsolr.hdfs.home=hdfs://host:port/path
 {noformat}
 For the purpose of this testing, turn off the default auto commit in 
 solrconfig.xml, i.e. comment out autoCommit like this:
 {code}
 <!--
 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>
 -->
 {code}
 2) Add a document without commit:
 {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H 
 "Content-type:text/xml; charset=utf-8" --data-binary @solr.xml}}
 3) Solr generates an empty tlog file (0 file size, the last one ends with 6):
 {noformat}
 [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
 /path/collection1/core_node1/data/tlog
 Found 5 items
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.001
 -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.003
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.004
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.005
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.006
 {noformat}
 4) Simulate Solr crash by killing the process with -9 option.
 5) Restart the Solr process. The observation is that uncommitted documents are
 not replayed and files in the tlog directory are cleaned up. Hence the uncommitted
 document(s) are lost.
 Am I missing anything, or is this a bug?
 BTW, additional observations:
 a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option),
 a non-empty tlog file is generated and, after re-starting Solr, the uncommitted
 document is replayed as expected.
 b) If Solr doesn't run on HDFS (i.e. on local file system), this issue is
 not observed either.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278863#comment-14278863
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1652153 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1652153 ]

SOLR-6770 accidentally commented out a test

 Add/edit param sets and use them in Requests
 

 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch


 Make it possible to define paramsets and use them directly in requests
 example
 {code}
 curl http://localhost:8983/solr/collection1/config/params -H 
 'Content-type:application/json'  -d '{
 set : {x: {
   a:A val,
   b: B val}
},
 set : {y: {
x:X val,
Y: Y val}
},
 update : {y: {
x:X val modified}
},
 delete : z
 }'
 #do a GET to view all the configured params
 curl http://localhost:8983/solr/collection1/config/params
 #or  GET with a specific name to get only one set of params
 curl http://localhost:8983/solr/collection1/config/params/x
 {code}
 This data will be stored in conf/params.json
 This is used requesttime and adding/editing params will not result in core 
 reload and it will have no impact on the performance 
 example usage http://localhost/solr/collection/select?useParams=x,y
 or it can be directly configured with a request handler as follows
 {code}
 requestHandler name=/dump1 class=DumpRequestHandler useParams=x/
 {code}
  {{useParams}} specified in request overrides the one specified in 
 {{requestHandler}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278883#comment-14278883
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1652159 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1652159 ]

SOLR-6770 mask the useParams after expanding it

 Add/edit param sets and use them in Requests
 

 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch


 Make it possible to define paramsets and use them directly in requests
 example
 {code}
 curl http://localhost:8983/solr/collection1/config/params -H 
 'Content-type:application/json'  -d '{
 set : {x: {
   a:A val,
   b: B val}
},
 set : {y: {
x:X val,
Y: Y val}
},
 update : {y: {
x:X val modified}
},
 delete : z
 }'
 #do a GET to view all the configured params
 curl http://localhost:8983/solr/collection1/config/params
 #or  GET with a specific name to get only one set of params
 curl http://localhost:8983/solr/collection1/config/params/x
 {code}
 This data will be stored in conf/params.json
 This is used requesttime and adding/editing params will not result in core 
 reload and it will have no impact on the performance 
 example usage http://localhost/solr/collection/select?useParams=x,y
 or it can be directly configured with a request handler as follows
 {code}
 requestHandler name=/dump1 class=DumpRequestHandler useParams=x/
 {code}
  {{useParams}} specified in request overrides the one specified in 
 {{requestHandler}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6985) AutoAddReplicas should support any directory factory backed by a shared filesystem

2015-01-15 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279360#comment-14279360
 ] 

Varun Thacker commented on SOLR-6985:
-

bq. Shouldn't the directory factory / directory have a method in its interface 
to indicate if the storage is shared, rather than the end user having to 
specify it in the configuration?

But I could be using MMapDir while the underlying FS is NFS, which can be 
shared. So there is no way a directory can know whether it is shared or not.



 AutoAddReplicas should support any directory factory backed by a shared 
 filesystem
 --

 Key: SOLR-6985
 URL: https://issues.apache.org/jira/browse/SOLR-6985
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Attachments: SOLR-6985.patch


 Currently one can only use AutoAddReplicas with HdfsDirectoryFactory. 
 I should also be able to use any directory factory as long as my underlying 
 filesystem is shared. I could be using MMapDirectory factory and have an 
 underlying NFS shared Filesystem.
 We should make the 'isSharedStorage' param configurable in solrconfig. This 
 should be set to true by the user if their underlying FS is shared. Currently 
 'isSharedStorage' is hardcoded to true for HDFSDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279181#comment-14279181
 ] 

Robert Muir commented on LUCENE-6184:
-

Yeah, it's just a future idea, maybe relevant to the API change of returning int 
instead of boolean.

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch, LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
 scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-15 Thread Lindsay Martin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279206#comment-14279206
 ] 

Lindsay Martin commented on SOLR-6931:
--

Is it possible to apply this to the 4.10.x branch?  This would help us out with 
https://issues.apache.org/jira/browse/SOLR-6983

 We should do a limited retry when using HttpClient.
 ---

 Key: SOLR-6931
 URL: https://issues.apache.org/jira/browse/SOLR-6931
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6931.patch, SOLR-6931.patch, SOLR-6931.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6985) AutoAddReplicas should support any directory factory backed by a shared filesystem

2015-01-15 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6985:

Attachment: SOLR-6985.patch

Simple patch where you can configure a directory factory to specify if the 
underlying FS is shared. HDFSDirectoryFactory remains hardcoded to true.

Sample configuration - 

{code}
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NIOFSDirectoryFactory}">
  <bool name="isSharedStorage">true</bool>
</directoryFactory>
{code}
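
A minimal sketch of how a directory factory could pick up that flag (the field name, default, and base class choice are assumptions for illustration, not the attached patch):

{code:java}
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.CachingDirectoryFactory;

// Hypothetical: read isSharedStorage from the <directoryFactory> args at init time.
abstract class SharedStorageAwareFactorySketch extends CachingDirectoryFactory {
  protected volatile boolean isSharedStorage;

  @Override
  public void init(NamedList args) {
    super.init(args);
    SolrParams params = SolrParams.toSolrParams(args);
    isSharedStorage = params.getBool("isSharedStorage", false); // HdfsDirectoryFactory would keep forcing true
  }

  public boolean isSharedStorage() {
    return isSharedStorage;
  }
}
{code}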


 AutoAddReplicas should support any directory factory backed by a shared 
 filesystem
 --

 Key: SOLR-6985
 URL: https://issues.apache.org/jira/browse/SOLR-6985
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Attachments: SOLR-6985.patch


 Currently one can only use AutoAddReplicas with HdfsDirectoryFactory. 
 I should also be able to use any directory factory as long as my underlying 
 filesystem is shared. I could be using MMapDirectory factory and have an 
 underlying NFS shared Filesystem.
 We should make the 'isSharedStorage' param configurable in solrconfig. This 
 should be set to true by the user if their underlying FS is shared. Currently 
 'isSharedStorage' is hardcoded to true for HDFSDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279157#comment-14279157
 ] 

ASF subversion and git services commented on LUCENE-6185:
-

Commit 1652244 from [~jpountz] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1652244 ]

LUCENE-6185: Fix IndexSearcher with threads to not collect documents out of 
order.

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches each group of leaves in a 
 separate task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?
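
A minimal, self-contained sketch of merging the per-task results with TopDocs.merge instead of re-collecting them (illustrative only; the surrounding plumbing is assumed):

{code:java}
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopFieldDocs;

// Hypothetical: each entry of perTaskHits is the already-sorted result of one search task.
class MergeSketch {
  static TopDocs mergeTaskResults(Sort sort, int numHits, TopFieldDocs[] perTaskHits) {
    // TopDocs.merge keeps the sort order and never re-collects,
    // so no out-of-order collection is involved.
    return TopDocs.merge(sort, numHits, perTaskHits);
  }
}
{code}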



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279154#comment-14279154
 ] 

ASF subversion and git services commented on LUCENE-6185:
-

Commit 1652242 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1652242 ]

LUCENE-6185: Fix IndexSearcher with threads to not collect documents out of 
order.

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches each group of leaves in a 
 separate task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6185.
--
Resolution: Fixed

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6984) Solr commitwithin is not happening for deletebyId

2015-01-15 Thread sriram vaithianathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sriram vaithianathan closed SOLR-6984.
--
Resolution: Duplicate

This issue is a duplicate of SOLR-5890. Hence closing it.

 Solr commitwithin is not happening for deletebyId
 -

 Key: SOLR-6984
 URL: https://issues.apache.org/jira/browse/SOLR-6984
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.6, Trunk
Reporter: sriram vaithianathan
 Fix For: 4.10.4, 5.0, Trunk

 Attachments: 4_10_3-SOLR-6984.patch, trunk-SOLR-6984.patch


 Hi All,
 Just found that SolrJ does not use commitWithin when using deleteById. This 
 issue is discussed in 
 http://grokbase.com/t/lucene/solr-user/1275gkpntd/deletebyid-commitwithin-question
 Faced the same issue today and found that in 
 org.apache.solr.client.solrj.request.UpdateRequest, when a new UpdateRequest is 
 created in the getRoutes() method (line number 244), the commitWithin param 
 is not set on the urequest variable as it is done a few lines above (line 
 number 204). This causes commitWithin to revert to its default value of -1, and 
 the commit does not happen. Setting it with 
 urequest.setCommitWithin(getCommitWithin()) enables the feature from 
 SolrJ.
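 For reference, a minimal SolrJ sketch of the failing scenario (hedged: the 
 ZooKeeper address and collection name below are placeholders, and the class 
 name is made up for illustration):
 {code}
 import org.apache.solr.client.solrj.impl.CloudSolrServer;

 // Sketch only: delete-by-id with a commitWithin of 10 seconds. With the bug
 // described above, CloudSolrServer routes this through UpdateRequest.getRoutes(),
 // which drops the commitWithin value (it stays -1), so no commit happens.
 public class DeleteByIdCommitWithinSketch {
   public static void main(String[] args) throws Exception {
     CloudSolrServer server = new CloudSolrServer("localhost:9983"); // placeholder ZK host
     server.setDefaultCollection("collection1");
     server.deleteById("1", 10000); // delete doc "1", ask for a commit within 10s
     server.shutdown();
   }
 }
 {code}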



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6985) AutoAddReplicas should support any directory factory backed by a shared filesystem

2015-01-15 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279322#comment-14279322
 ] 

Ramkumar Aiyengar commented on SOLR-6985:
-

Shouldn't the directory factory / directory have a method in its interface to 
indicate if the storage is shared, rather than the end user having to specify 
it in the configuration?
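
Roughly what that could look like (a sketch only; the class names below are
illustrative and not Solr's actual DirectoryFactory API):

{code}
// Illustrative sketch, not actual Solr code: let each factory advertise whether
// the storage it creates is shared across nodes, so solrconfig only needs an
// override for factories that cannot know on their own (e.g. MMapDirectory on NFS).
abstract class DirectoryFactorySketch {
  /** Hypothetical hook; defaults to "not shared", which is right for plain local disks. */
  public boolean isSharedStorage() {
    return false;
  }
}

class HdfsDirectoryFactorySketch extends DirectoryFactorySketch {
  @Override
  public boolean isSharedStorage() {
    return true; // HDFS is shared by definition, no extra configuration needed
  }
}
{code}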

 AutoAddReplicas should support any directory factory backed by a shared 
 filesystem
 --

 Key: SOLR-6985
 URL: https://issues.apache.org/jira/browse/SOLR-6985
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Attachments: SOLR-6985.patch


 Currently one can only use AutoAddReplicas with HdfsDirectoryFactory. 
 I should also be able to use any directory factory as long as my underlying 
 filesystem is shared. I could be using MMapDirectory factory and have an 
 underlying NFS shared Filesystem.
 We should make the 'isSharedStorage' param configurable in solrconfig. This 
 should be set to true by the user if their underlying FS is shared. Currently 
 'isSharedStorage' is hardcoded to true for HDFSDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6986) Modernize links on bottom right of Admin UI

2015-01-15 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-6986:


 Summary: Modernize links on bottom right of Admin UI
 Key: SOLR-6986
 URL: https://issues.apache.org/jira/browse/SOLR-6986
 Project: Solr
  Issue Type: Task
  Components: web gui
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 5.0


# The {{IRC Channel}} link goes to the #solr channel on freenode's web UI, but 
maybe should go to the IRC section of the website's resources page under 
Community
# The {{Community forum}} link goes to the moinmoin wiki's mailing lists page, 
but should go to the mailing list section of the website's resources page under 
Community
# The {{Solr Query Syntax}} link goes to the moinmoin wiki's query syntax page, 
but should instead go to the ref guide's page



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2487 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2487/

6 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([413B4C505E1F0840]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([413B4C505E1F0840]:0)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([413B4C505E1F0840:C0DDC2482940687C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-6985) AutoAddReplicas should support any directory factory backed by a shared filesystem

2015-01-15 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6985:
---

 Summary: AutoAddReplicas should support any directory factory 
backed by a shared filesystem
 Key: SOLR-6985
 URL: https://issues.apache.org/jira/browse/SOLR-6985
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor


Currently one can only use AutoAddReplicas with HdfsDirectoryFactory. 

I should also be able to use any directory factory as long as my underlying 
filesystem is shared. I could be using MMapDirectory factory and have an 
underlying NFS shared Filesystem.

We should make the 'isSharedStorage' param configurable in solrconfig. This 
should be set to true by the user if their underlying FS is shared. Currently 
'isSharedStorage' is hardcoded to true for HDFSDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_72) - Build # 4315 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4315/
Java: 32bit/jdk1.7.0_72 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  A val for path [params, a] full output null

Stack Trace:
java.lang.AssertionError: Could not get expected value  A val for path [params, 
a] full output null
at 
__randomizedtesting.SeedInfo.seed([2B970CC80F51D13B:AA7182D0780EB107]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:133)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:75)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-6185) Fix IndexSearcher with threads to not collect documents out of order

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279145#comment-14279145
 ] 

ASF subversion and git services commented on LUCENE-6185:
-

Commit 1652239 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652239 ]

LUCENE-6185: Fix IndexSearcher with threads to not collect documents out of 
order.

 Fix IndexSearcher with threads to not collect documents out of order
 

 Key: LUCENE-6185
 URL: https://issues.apache.org/jira/browse/LUCENE-6185
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6185.patch


 When created with an executor, IndexSearcher searches all leaves in a 
 different task and eventually merges the results when all tasks are 
 completed. However, this merging logic involves a TopFieldCollector which is 
 collected out-of-order. I think it should just use TopDocs.merge?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6183) Avoid re-compression on stored fields merge

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279171#comment-14279171
 ] 

Adrien Grand commented on LUCENE-6183:
--

+1

 Avoid re-compression on stored fields merge
 ---

 Key: LUCENE-6183
 URL: https://issues.apache.org/jira/browse/LUCENE-6183
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6183.patch, LUCENE-6183.patch


 We removed this optimization before; it didn't really work right because it 
 required things to be aligned. 
 But I think we can do it simpler and safer. This recompression is a big CPU 
 hog in merging, and it limits our options compression-wise (especially ones 
 like LZ4-HC that are only slower at write time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5735) Faceting for DateRangePrefixTree

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279192#comment-14279192
 ] 

ASF subversion and git services commented on LUCENE-5735:
-

Commit 1652254 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1652254 ]

LUCENE-5735: move PrefixTreeFacetCounter up a package

 Faceting for DateRangePrefixTree
 

 Key: LUCENE-5735
 URL: https://issues.apache.org/jira/browse/LUCENE-5735
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.x

 Attachments: LUCENE-5735.patch, LUCENE-5735.patch, 
 LUCENE-5735__PrefixTreeFacetCounter.patch


 The newly added DateRangePrefixTree (DRPT) encodes terms in a fashion 
 amenable to faceting by meaningful time buckets. The motivation for this 
 feature is to efficiently populate a calendar bar chart or 
 [heat-map|http://bl.ocks.org/mbostock/4063318]. It's not hard if you have 
 date instances, as many do, but it's challenging for date ranges.
 Internally this is going to iterate over the terms using seek/next with 
 TermsEnum as appropriate.  It should be quite efficient; it won't need any 
 special caches. I should be able to re-use SPT traversal code in 
 AbstractVisitingPrefixTreeFilter.  If this goes especially well; the 
 underlying implementation will be re-usable for geospatial heat-map faceting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5735) Faceting for DateRangePrefixTree

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279292#comment-14279292
 ] 

ASF subversion and git services commented on LUCENE-5735:
-

Commit 1652270 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1652270 ]

LUCENE-5735: don't run test 10,000 times. if this is needed please change back 
but add Nightly

 Faceting for DateRangePrefixTree
 

 Key: LUCENE-5735
 URL: https://issues.apache.org/jira/browse/LUCENE-5735
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.x

 Attachments: LUCENE-5735.patch, LUCENE-5735.patch, 
 LUCENE-5735__PrefixTreeFacetCounter.patch


 The newly added DateRangePrefixTree (DRPT) encodes terms in a fashion 
 amenable to faceting by meaningful time buckets. The motivation for this 
 feature is to efficiently populate a calendar bar chart or 
 [heat-map|http://bl.ocks.org/mbostock/4063318]. It's not hard if you have 
 date instances, as many do, but it's challenging for date ranges.
 Internally this is going to iterate over the terms using seek/next with 
 TermsEnum as appropriate.  It should be quite efficient; it won't need any 
 special caches. I should be able to re-use SPT traversal code in 
 AbstractVisitingPrefixTreeFilter.  If this goes especially well; the 
 underlying implementation will be re-usable for geospatial heat-map faceting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6183) Avoid re-compression on stored fields merge

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279291#comment-14279291
 ] 

ASF subversion and git services commented on LUCENE-6183:
-

Commit 1652269 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1652269 ]

LUCENE-6183: Avoid re-compression on stored fields merge

 Avoid re-compression on stored fields merge
 ---

 Key: LUCENE-6183
 URL: https://issues.apache.org/jira/browse/LUCENE-6183
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6183.patch, LUCENE-6183.patch


 We removed this optimization before; it didn't really work right because it 
 required things to be aligned. 
 But I think we can do it simpler and safer. This recompression is a big CPU 
 hog in merging, and it limits our options compression-wise (especially ones 
 like LZ4-HC that are only slower at write time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6183) Avoid re-compression on stored fields merge

2015-01-15 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6183.
-
Resolution: Fixed

 Avoid re-compression on stored fields merge
 ---

 Key: LUCENE-6183
 URL: https://issues.apache.org/jira/browse/LUCENE-6183
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6183.patch, LUCENE-6183.patch


 We removed this optimization before; it didn't really work right because it 
 required things to be aligned. 
 But I think we can do it simpler and safer. This recompression is a big CPU 
 hog in merging, and it limits our options compression-wise (especially ones 
 like LZ4-HC that are only slower at write time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6183) Avoid re-compression on stored fields merge

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279317#comment-14279317
 ] 

ASF subversion and git services commented on LUCENE-6183:
-

Commit 1652275 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652275 ]

LUCENE-6183: Avoid re-compression on stored fields merge

 Avoid re-compression on stored fields merge
 ---

 Key: LUCENE-6183
 URL: https://issues.apache.org/jira/browse/LUCENE-6183
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6183.patch, LUCENE-6183.patch


 We removed this optimization before; it didn't really work right because it 
 required things to be aligned. 
 But I think we can do it simpler and safer. This recompression is a big CPU 
 hog in merging, and it limits our options compression-wise (especially ones 
 like LZ4-HC that are only slower at write time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2488 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2488/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:13843/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:13843/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([D4F47C4C7D290883:5512F2540A7668BF]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279509#comment-14279509
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1652310 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652310 ]

LUCENE-5569: Add MIGRATE entry for 5.0

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261237#comment-14261237
 ] 

Boon Low edited comment on SOLR-6648 at 1/15/15 11:37 PM:
--

I have created a patch for this, in Lucene and Solr, so that highlighting and 
the Boolean matching clause can be configured in solrconfig.xml, for 
*BlendedInfixSuggester* and *AnalyzingInfixSuggester*:

{code:xml}
<lst name="suggester">
  <str name="name">..</str>
  <str name="lookupImpl">BlendedInfixLookupFactory</str>
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  ...
  <str name="allTermsRequired">false</str>
  <str name="highlight">true</str>
</lst>
{code}

If not configured, both 'highlight' and 'allTermsRequired' default to *true*.


was (Author: boonious):
I have created a patch for this, in Lucene and Solr, so that highlighting and 
the Boolean matching clause can be configured in solrconfig.xml, for 
*BlendedInfixSuggester* and *AnalyzingInfixSuggester*:

{code:xml}
<lst name="suggester">
  <str name="name">..</str>
  <str name="lookupImpl">BlendedInfixLookupFactory</str>
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  ...
  <str name="allTermsRequired">false</str>
  <str name="highlighting">true</str>
</lst>
{code}

If not configured, both 'highlighting' and 'allTermsRequired' default to *true*.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl http://localhost:8983/solr/update/json?commit=true -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester line 200, where we call lookup.lookup().
 This call does not take allTermsRequired and doHighlight, since they are 
 only tunable on AnalyzingInfixSuggester and not on the other lookup 
 implementations.
 If different Lookup implementations have different params in their 
 constructors, this sort of issue will always keep happening. Maybe we 
 should not keep it generic, and instead do instanceof checks and set params 
 accordingly?
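 To make the asymmetry concrete, a hedged Java sketch (it assumes a suggester 
 that has already been built; the wrapper class and method are made up here):
 {code}
 import java.io.IOException;
 import java.util.List;

 import org.apache.lucene.search.suggest.Lookup;
 import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;

 // Illustrative only: SolrSuggester goes through the generic Lookup API, which
 // has no flags, while AnalyzingInfixSuggester also exposes an overload with
 // allTermsRequired and doHighlight.
 class SuggesterLookupSketch {
   static List<Lookup.LookupResult> suggest(AnalyzingInfixSuggester suggester)
       throws IOException {
     // generic path (what SolrSuggester uses): highlighting cannot be turned off here
     List<Lookup.LookupResult> generic = suggester.lookup("bass", false, 10);
     // suggester-specific path: both knobs are available
     return suggester.lookup("bass", 10, /* allTermsRequired */ false, /* doHighlight */ false);
   }
 }
 {code}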



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Error building PyLucene with added classes

2015-01-15 Thread Daniel Duma
Thanks Andi,

I'd love to do it the proper way, but I have no idea how to go about
building my own jar files, much less where to put that -jar parameter for
jcc. Is there a tutorial on this somewhere?

Cheers,
Daniel

On 15 January 2015 at 18:01, Andi Vajda va...@apache.org wrote:


  On Jan 15, 2015, at 09:31, Daniel Duma danield...@gmail.com wrote:
 
  Update: never mind, I was placing the files in the wrong folder. Solved!

 Good, that was going to be my first question since you didn't tell us
 anything about your new class(es).

  The proper way to add things to lucene and pylucene is to put your stuff
  into your own package and to create a jar file from that package. Then, to
  add it to pylucene, you just add that jar to the list of jar files its build
  processes, using jcc's --jar parameter.

 Andi..

 
  Thanks,
  Daniel
 
  On 15 January 2015 at 17:00, Daniel Duma danield...@gmail.com wrote:
 
  Hi all,
 
  I have added some classes that I need to Lucene and now I cannot build
  PyLucene 4.9.
 
  Everything runs fine inside Eclipse, but when copying the .java files to
  the corresponding folders inside the PyLucene source directory and
  rebuilding, I get this error:
 
 
 
 
 
 
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
  error: cannot find symbol
  symbol: class QueryParser

  Here is the full output:
 
  ivy-configure:
  [ivy:configure] :: Apache Ivy 2.4.0-rc1 - 20140315220245 ::
  http://ant.apache.org/ivy/ ::
  [ivy:configure] :: loading settings :: file =
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\ivy-settings.xml
 
  resolve:
 
  init:
 
  -clover.disable:
 
  -clover.load:
 
  -clover.classpath:
 
  -clover.setup:
 
  clover:
 
  compile-core:
 [javac] Compiling 734 source files to
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\build\core\classes\java
 [javac]
 
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:15:
  error: cannot find symbol
 [javac] MultiFieldQueryParser {
 [javac] ^
 [javac]   symbol: class MultiFieldQueryParser
 [javac]
 
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
  error: cannot find symbol
 [javac] public class FieldAgnosticQueryParser extends QueryParser {
 [javac]   ^
 [javac]   symbol: class QueryParser
 [javac]
 
 
 C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:23:
  error: method does not override or implement a method from a supertype
 [javac] @Override
 [javac] ^
 
  BUILD FAILED
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:694:
 The
  following error occurred while executing this line:
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:480:
 The
  following error occurred while executing this line:
  C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:1755:
  Compile failed; see the compiler error output for details.
 
  PyLucene builds just fine without the added files, and I have checked
 and
  the files it can't find are where they should be!
 
  Cheers,
  Daniel
 



[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boon Low updated SOLR-6648:
---
Attachment: SOLR-6648.patch

Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/14) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default settings 
(allTermsRequired, highlight = true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl http://localhost:8983/solr/update/json?commit=true -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester line 200, where we call lookup.lookup().
 This call does not take allTermsRequired and doHighlight, since they are 
 only tunable on AnalyzingInfixSuggester and not on the other lookup 
 implementations.
 If different Lookup implementations have different params in their 
 constructors, this sort of issue will always keep happening. Maybe we 
 should not keep it generic, and instead do instanceof checks and set params 
 accordingly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6985) AutoAddReplicas should support any directory factory backed by a shared filesystem

2015-01-15 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279515#comment-14279515
 ] 

Ramkumar Aiyengar commented on SOLR-6985:
-

I get it. In such a case, probably just the directories which need this hint should 
provide this configuration option. Btw, is the attached patch complete?

 AutoAddReplicas should support any directory factory backed by a shared 
 filesystem
 --

 Key: SOLR-6985
 URL: https://issues.apache.org/jira/browse/SOLR-6985
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Attachments: SOLR-6985.patch


 Currently one can only use AutoAddReplicas with HdfsDirectoryFactory. 
 I should also be able to use any directory factory as long as my underlying 
 filesystem is shared. I could be using MMapDirectory factory and have an 
 underlying NFS shared Filesystem.
 We should make the 'isSharedStorage' param configurable in solrconfig. This 
 should be set to true by the user if their underlying FS is shared. Currently 
 'isSharedStorage' is hardcoded to true for HDFSDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279513#comment-14279513
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1652311 from [~rjernst] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1652311 ]

LUCENE-5569: Add MIGRATE entry for 5.0 (merged 1652310)

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279540#comment-14279540
 ] 

Boon Low edited comment on SOLR-6648 at 1/15/15 11:49 PM:
--

Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/15) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with additional tests 
based on the new SolrSuggester (cf. Suggester) in default allTermsRequired, 
highlight config settings (true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.


was (Author: boonious):
Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/15) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default allTermsRequired, 
highlight config settings (true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl http://localhost:8983/solr/update/json?commit=true -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester line 200, where we call lookup.lookup().
 This call does not take allTermsRequired and doHighlight, since they are 
 only tunable on AnalyzingInfixSuggester and not on the other lookup 
 implementations.
 If different Lookup implementations have different params in their 
 constructors, this sort of issue will always keep happening. Maybe we 
 should not keep it generic, and instead do instanceof checks and set params 
 accordingly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (64bit/jdk1.8.0_40-ea-b20) - Build # 4 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/4/
Java: 64bit/jdk1.8.0_40-ea-b20 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob after 150 attempts. Expecting 2 items. time 
elapsed 15,617  output  for url is {   responseHeader:{ status:0, 
QTime:0},   response:{ numFound:1, start:0, docs:[{   
  id:test/1, md5:2c974205bc83615352406a7171c9862, 
blobName:test, version:1, 
timestamp:2015-01-15T22:02:27.762Z, size:5317}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob after 150 attempts. 
Expecting 2 items. time elapsed 15,617  output  for url is {
  responseHeader:{
status:0,
QTime:0},
  response:{
numFound:1,
start:0,
docs:[{
id:test/1,
md5:2c974205bc83615352406a7171c9862,
blobName:test,
version:1,
timestamp:2015-01-15T22:02:27.762Z,
size:5317}]}}
at 
__randomizedtesting.SeedInfo.seed([CFBA9475AD91E3C1:4E5C1A6DDACE83FD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:150)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:114)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:70)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279427#comment-14279427
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1652295 from [~rjernst] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1652295 ]

LUCENE-5569: Backport changes entry (merged 1652294)

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279444#comment-14279444
 ] 

Ryan Ernst commented on LUCENE-5569:


[~varunthacker] This was an oversight when doing the {{branch_5x}} backport, 
because it was done independently (without merge).  I've copied the changes 
entry back to {{branch_5x}} and {{lucene_solr_5_0}}.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279522#comment-14279522
 ] 

Ryan Ernst commented on LUCENE-5569:


Good idea Uwe, I've added a migrate entry.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279423#comment-14279423
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1652294 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1652294 ]

LUCENE-5569: Backport changes entry

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279500#comment-14279500
 ] 

Uwe Schindler commented on LUCENE-5569:
---

Do we have it in 5.0's MIGRATE.txt? Maybe we should place it there, because 
this may be a major rename for people with lots of custom code.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6987) SSL support for MiniSolrCloudCluster

2015-01-15 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-6987:


 Summary: SSL support for MiniSolrCloudCluster
 Key: SOLR-6987
 URL: https://issues.apache.org/jira/browse/SOLR-6987
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Gregory Chanan
Assignee: Gregory Chanan


SOLR-3854 added SSL support, but didn't add support to the 
MiniSolrCloudCluster.  The existing TestMiniSolrCloudCluster doesn't inherit 
from SolrTestCaseJ4, so the test never failed or required SuppressSSL.

We should update the MiniSolrCloudCluster so dependents can use it to test SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boon Low updated SOLR-6648:
---
Attachment: (was: SOLR-6648.patch)

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester Line 200 where we say lookup.lookup()
 This constructor does not take allTermsRequired and doHighlight since it's 
 only tuneable to AnalyzingInfixSuggester and not the other lookup 
 implementations.
 If different Lookup implementations have different params as their 
 constructors, these sort of issues will always keep happening. Maybe we 
 should not keep it generic and do instanceof checks and set params 
 accordingly?
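
As a rough illustration of the instanceof approach mentioned above, here is a sketch under assumptions (the helper class and method are made up; this is not the attached patch):
{code}
import java.io.IOException;
import java.util.List;

import org.apache.lucene.search.suggest.Lookup;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;

public class SuggesterLookupHelper {
  // Route the call through the infix-specific overload when the implementation
  // supports allTermsRequired/doHighlight; otherwise fall back to the generic signature.
  public static List<LookupResult> lookup(Lookup lookup, CharSequence token, int count,
      boolean allTermsRequired, boolean highlight) throws IOException {
    if (lookup instanceof AnalyzingInfixSuggester) {
      return ((AnalyzingInfixSuggester) lookup).lookup(token, count, allTermsRequired, highlight);
    }
    return lookup.lookup(token, false, count);
  }
}
{code}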



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279540#comment-14279540
 ] 

Boon Low edited comment on SOLR-6648 at 1/15/15 11:48 PM:
--

Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/15) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default allTermsRequired, 
highlight config settings (true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.


was (Author: boonious):
Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/15) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default settings 
(allTermsRequired, highlight = true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester Line 200 where we say lookup.lookup()
 This constructor does not take allTermsRequired and doHighlight since it's 
 only tuneable to AnalyzingInfixSuggester and not the other lookup 
 implementations.
 If different Lookup implementations have different params as their 
 constructors, these sort of issues will always keep happening. Maybe we 
 should not keep it generic and do instanceof checks and set params 
 accordingly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-15 Thread Boon Low (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279540#comment-14279540
 ] 

Boon Low edited comment on SOLR-6648 at 1/15/15 11:46 PM:
--

Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/15) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default settings 
(allTermsRequired, highlight = true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.


was (Author: boonious):
Hey Tomás, at last here is a new patch (w.r.t. trunk 14/01/14) containing unit 
tests. Instead of creating a new test case, I have updated 
*TestAnalyzeInfixSuggestions*' single and multiple tests with suggester tests 
based on the new SolrSuggester (cf. Suggester) in default settings 
(allTermsRequired, highlight = true), plus 2 new tests for 
*allTermsRequired=false*, *highlight=false* scenarios.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 schema additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 solrconfig changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester Line 200 where we say lookup.lookup()
 This constructor does not take allTermsRequired and doHighlight since it's 
 only tuneable to AnalyzingInfixSuggester and not the other lookup 
 implementations.
 If different Lookup implementations have different params as their 
 constructors, these sort of issues will always keep happening. Maybe we 
 should not keep it generic and do instanceof checks and set params 
 accordingly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6987) SSL support for MiniSolrCloudCluster

2015-01-15 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-6987:
-
Attachment: SOLR-6987.patch

Here's a patch and a small test.

 SSL support for MiniSolrCloudCluster
 

 Key: SOLR-6987
 URL: https://issues.apache.org/jira/browse/SOLR-6987
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-6987.patch


 SOLR-3854 added SSL support, but didn't add support to the 
 MiniSolrCloudCluster.  The existing TestMiniSolrCloudCluster doesn't inherit 
 from SolrTestCaseJ4, so the test never failed or required SuppressSSL.
 We should update the MiniSolrCloudCluster so dependents can use it to test 
 SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2489 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2489/

6 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([77E431020579422E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([77E431020579422E]:0)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([77E431020579422E:F602BF1A72262212]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6984) Solr commitwithin is not happening for deletebyId

2015-01-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279146#comment-14279146
 ] 

Erick Erickson commented on SOLR-6984:
--

So can we close this as a duplicate of 5890?

 Solr commitwithin is not happening for deletebyId
 -

 Key: SOLR-6984
 URL: https://issues.apache.org/jira/browse/SOLR-6984
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.6, Trunk
Reporter: sriram vaithianathan
 Fix For: 4.10.4, 5.0, Trunk

 Attachments: 4_10_3-SOLR-6984.patch, trunk-SOLR-6984.patch


 Hi All,
 Just found that SolrJ does not use commitWithin when using deleteById. This 
 issue is discussed in 
 http://grokbase.com/t/lucene/solr-user/1275gkpntd/deletebyid-commitwithin-question
 Faced the same issue today and found that in 
 org.apache.solr.client.solrj.request.UpdateRequest, when a new UpdateRequest is 
 created in the getRoutes() method (line number 244), the setCommitWithin param 
 is not set on the urequest variable as it is done a few lines above (line 
 number 204). This causes commitWithin to revert to its default value of -1, and 
 the commit does not happen. Tried setting it like 
 urequest.setCommitWithin(getCommitWithin()) and the feature works from 
 SolrJ.
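
A minimal sketch of the fix being described, using a hypothetical helper rather than the real getRoutes() body:
{code}
import org.apache.solr.client.solrj.request.UpdateRequest;

public class CommitWithinRouteFix {
  // Hypothetical helper mirroring what getRoutes() should do for each
  // per-shard request it builds from the original request.
  static UpdateRequest cloneForRoute(UpdateRequest original) {
    UpdateRequest urequest = new UpdateRequest();
    // The missing step: without it, commitWithin reverts to the default of -1
    // on the routed request and the delete-by-id is never committed within
    // the requested window.
    urequest.setCommitWithin(original.getCommitWithin());
    return urequest;
  }
}
{code}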



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6183) Avoid re-compression on stored fields merge

2015-01-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279176#comment-14279176
 ] 

Michael McCandless commented on LUCENE-6183:


+1

 Avoid re-compression on stored fields merge
 ---

 Key: LUCENE-6183
 URL: https://issues.apache.org/jira/browse/LUCENE-6183
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6183.patch, LUCENE-6183.patch


 We removed this optimization before; it didn't really work right because it 
 required things to be aligned. 
 But I think we can do it simpler and safer. This recompression is a big CPU 
 hog during merges, and it limits our options compression-wise (especially ones 
 like LZ4-HC that are only slower at write-time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6184) BooleanScorer should better deal with sparse clauses

2015-01-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279175#comment-14279175
 ] 

Adrien Grand commented on LUCENE-6184:
--

Yes, I think this way we could handle disjunctions in BooleanScorer (including 
minShouldMatch)! But this should be a separate issue?

 BooleanScorer should better deal with sparse clauses
 

 Key: LUCENE-6184
 URL: https://issues.apache.org/jira/browse/LUCENE-6184
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6184.patch, LUCENE-6184.patch


 The way that BooleanScorer works looks like this:
 {code}
 for each (window of 2048 docs) {
   for each (optional scorer) {
 scorer.score(window)
   }
 }
 {code}
 This is not efficient for very sparse clauses (doc freq much lower than 
 maxDoc/2048) since we keep on scoring windows of documents that do not match 
 anything. BooleanScorer2 currently performs better in those cases.
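
A rough sketch of the idea under assumptions (the scoreWindow() helper and the exact alignment are made up; this is not the attached patch): jump directly to the window containing the next match of any clause instead of visiting every 2048-doc window.
{code}
import java.io.IOException;
import java.util.List;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Scorer;

public class SparseWindowSketch {
  static final int WINDOW_SIZE = 2048;

  static void scoreSparse(List<Scorer> optionalScorers, int maxDoc) throws IOException {
    int windowBase = 0;
    while (windowBase < maxDoc) {
      // Find the smallest doc id >= windowBase across all optional clauses.
      int next = DocIdSetIterator.NO_MORE_DOCS;
      for (Scorer scorer : optionalScorers) {
        int doc = scorer.docID();
        if (doc < windowBase) {
          doc = scorer.advance(windowBase);
        }
        next = Math.min(next, doc);
      }
      if (next == DocIdSetIterator.NO_MORE_DOCS) {
        break; // every clause is exhausted
      }
      // Score only the window that actually contains a match.
      windowBase = next - (next % WINDOW_SIZE);
      scoreWindow(optionalScorers, windowBase, Math.min(windowBase + WINDOW_SIZE, maxDoc));
      windowBase += WINDOW_SIZE;
    }
  }

  // Placeholder for the existing per-window scoring loop.
  static void scoreWindow(List<Scorer> scorers, int min, int max) {
  }
}
{code}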



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6986) Modernize links on bottom right of Admin UI

2015-01-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279739#comment-14279739
 ] 

Shawn Heisey commented on SOLR-6986:


I've been working on a wiki page that has considerably more detail about the 
IRC channels than anything else.

https://wiki.apache.org/solr/IRCChannels

What I'd like to have happen is the IRC link in the admin UI point there, but 
I'm open to other suggestions.  The main thing I want to do is educate users in 
the basics of IRC etiquette before they are dropped into the channel ... that 
etiquette is ingrained in the veterans and beginners will usually have a bad 
experience without knowing a few things beforehand.


 Modernize links on bottom right of Admin UI
 ---

 Key: SOLR-6986
 URL: https://issues.apache.org/jira/browse/SOLR-6986
 Project: Solr
  Issue Type: Task
  Components: web gui
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 5.0


 # The {{IRC Channel}} link goes to the #solr channel on freenode's web UI, 
 but maybe should go to the IRC section of the website's resources page under 
 Community
 # The {{Community forum}} link goes to the moinmoin wiki's mailing lists 
 page, but should go to the mailing list section of the website's resources 
 page under Community
 # The {{Solr Query Syntax}} link goes to the moinmoin wiki's query syntax 
 page, but should instead go to the ref guide's page



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Error building PyLucene with added classes

2015-01-15 Thread Andi Vajda


 Hi Daniel,

On Thu, 15 Jan 2015, Daniel Duma wrote:


Thanks Andi,

I'd love to do it the proper way, but I have no idea how to go about
building my own jar files,


http://docs.oracle.com/javase/tutorial/deployment/jar/build.html


much less where to put that -jar parameter for
jcc. Is there a tutorial on this somewhere?


That parameter is visible on line 319 of PyLucene 4.9's Makefile.
Look for --jar.

Andi..



Cheers,
Daniel

On 15 January 2015 at 18:01, Andi Vajda va...@apache.org wrote:




On Jan 15, 2015, at 09:31, Daniel Duma danield...@gmail.com wrote:

Update: never mind, I was placing the files in the wrong folder. Solved!


Good, that was going to be my first question since you didn't tell us
anything about your new class(es).

The proper way to add things to lucene and pylucene is to put your stuff
into your own package and to create a jar file from that package. Then, to
add it to pylucene, you just add it to the list of jar files its build
processes, with jcc's --jar parameter.

Andi..



Thanks,
Daniel


On 15 January 2015 at 17:00, Daniel Duma danield...@gmail.com wrote:

Hi all,

I have added some classes that I need to Lucene and now I cannot build
PyLucene 4.9.

Everything runs fine inside Eclipse, but when copying the .java files to
the corresponding folders inside the PyLucene source directory and
rebuilding, I get this error:







C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:
error: cannot find symbol
  symbol: class QueryParser

Here is the full output:


ivy-configure:
[ivy:configure] :: Apache Ivy 2.4.0-rc1 - 20140315220245 ::
http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file =
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.load:

-clover.classpath:

-clover.setup:

clover:

compile-core:
   [javac] Compiling 734 source files to
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\build\core\classes\java
   [javac]



C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:15:

error: cannot find symbol
   [javac] MultiFieldQueryParser {
   [javac] ^
   [javac]   symbol: class MultiFieldQueryParser
   [javac]



C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticQueryParser.java:12:

error: cannot find symbol
   [javac] public class FieldAgnosticQueryParser extends QueryParser {
   [javac]   ^
   [javac]   symbol: class QueryParser
   [javac]



C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\core\src\java\org\apache\lucene\queryparser\classic\FieldAgnosticMultiFieldQueryParser.java:23:

error: method does not override or implement a method from a supertype
   [javac] @Override
   [javac] ^

BUILD FAILED
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:694: The following error occurred while executing this line:
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:480: The following error occurred while executing this line:
C:\NLP\pylucene-4.9.0-0\lucene-java-4.9.0\lucene\common-build.xml:1755:
Compile failed; see the compiler error output for details.

PyLucene builds just fine without the added files, and I have checked and
the files it can't find are where they should be!

Cheers,
Daniel







[JENKINS] Lucene-Solr-5.0-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 5 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/5/
Java: 32bit/jdk1.8.0_40-ea-b20 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  CY val modified for path [response, params, y, c] 
full output {   responseHeader:{ status:0, QTime:0},   
response:{ znodeVersion:0, params:{x:{ a:A val, 
b:B val, :{v:0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  CY val modified for 
path [response, params, y, c] full output {
  responseHeader:{
status:0,
QTime:0},
  response:{
znodeVersion:0,
params:{x:{
a:A val,
b:B val,
:{v:0}
at 
__randomizedtesting.SeedInfo.seed([847BAD3742249AD8:59D232F357BFAE4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:215)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:75)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4419 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4419/
Java: 64bit/jdk1.8.0_25 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_72) - Build # 4316 - Still Failing!

2015-01-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4316/
Java: 64bit/jdk1.7.0_72 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:54890/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:54890/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:276)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor61.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2490 - Still Failing

2015-01-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2490/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([F8A9CB7B25CA95E7:794F45635295F5DB]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
