[jira] [Resolved] (LUCENE-6348) Incorrect results from UAX_URL_EMAIL tokenizer

2015-03-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-6348.

Resolution: Not a Problem
  Assignee: Steve Rowe

Hi Benji,

This is the intended behavior.

Before LUCENE-5897/LUCENE-5400 were committed in Lucene 4.9.1, tokenization 
rules could match tokens of any length, and tokens larger than 
max_token_length would simply be (silently) dropped, not truncated.

From Lucene 4.9.1 onward, StandardTokenizer and UAX29URLEmailTokenizer rules 
are not allowed to match more than max_token_length characters, so URL 
prefixes will match, but the remaining, unmatched characters of the URL are 
subject to all of the other tokenization rules, resulting in the behavior 
you're seeing.

To get the behavior you want, increase the max_token_length to the maximum 
token length you expect to encounter, then add a TruncateTokenFilter, set to 
truncate tokens to your current max_token_length. 
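
For example, a minimal Lucene 5.x sketch of that chain (the 4096 cap and the 
anonymous Analyzer are illustrative assumptions, not code from this issue):

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.TruncateTokenFilter;
import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;

Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer();
    // raise the match limit past the longest URL you expect to encounter
    tokenizer.setMaxTokenLength(4096);
    // then cut every token back down to the original 64-char limit
    TokenStream stream = new TruncateTokenFilter(tokenizer, 64);
    return new TokenStreamComponents(tokenizer, stream);
  }
};
{code}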

 Incorrect results from UAX_URL_EMAIL tokenizer
 --

 Key: LUCENE-6348
 URL: https://issues.apache.org/jira/browse/LUCENE-6348
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
 Environment: Elasticsearch 1.3.4 on Ubuntu 14.04.2
Reporter: Benji Smith
Assignee: Steve Rowe

 I'm using an analyzer based on the UAX_URL_EMAIL tokenizer, with a maximum 
 token length of 64 characters. I expect the analyzer to discard any text in 
 the URL beyond those 64 characters, but the actual results yield ordinary 
 terms from the tail end of the URL.
 For example, 
 {code}
 curl -XGET "http://localhost:9200/my_index/_analyze?analyzer=uax_url_email_analyzer" \
   -d "hey, check out http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-death-is-optional for some light reading."
 {code}
 The results look like this:
 {code}
 {
   "tokens": [
     {
       "token": "hey",
       "start_offset": 0,
       "end_offset": 3,
       "type": "<ALPHANUM>",
       "position": 1
     },
     {
       "token": "check",
       "start_offset": 5,
       "end_offset": 10,
       "type": "<ALPHANUM>",
       "position": 2
     },
     {
       "token": "out",
       "start_offset": 11,
       "end_offset": 14,
       "type": "<ALPHANUM>",
       "position": 3
     },
     {
       "token": "http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-d",
       "start_offset": 15,
       "end_offset": 79,
       "type": "<URL>",
       "position": 4
     },
     {
       "token": "eath",
       "start_offset": 79,
       "end_offset": 83,
       "type": "<ALPHANUM>",
       "position": 5
     },
     {
       "token": "is",
       "start_offset": 84,
       "end_offset": 86,
       "type": "<ALPHANUM>",
       "position": 6
     },
     {
       "token": "optional",
       "start_offset": 87,
       "end_offset": 95,
       "type": "<ALPHANUM>",
       "position": 7
     },
     {
       "token": "for",
       "start_offset": 96,
       "end_offset": 99,
       "type": "<ALPHANUM>",
       "position": 8
     },
     {
       "token": "some",
       "start_offset": 100,
       "end_offset": 104,
       "type": "<ALPHANUM>",
       "position": 9
     },
     {
       "token": "light",
       "start_offset": 105,
       "end_offset": 110,
       "type": "<ALPHANUM>",
       "position": 10
     },
     {
       "token": "reading",
       "start_offset": 111,
       "end_offset": 118,
       "type": "<ALPHANUM>",
       "position": 11
     }
   ]
 }
 {code}
 The term from the extracted URL is correct, and correctly truncated at 64 
 characters. But as you can see, the analysis pipeline also creates three 
 spurious terms [ "eath", "is", "optional" ] which come from the discarded 
 portion of the URL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7195) bin/solr script thinks port 8983 is in use, when in fact it is 18983 that is in use

2015-03-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351065#comment-14351065
 ] 

Shawn Heisey commented on SOLR-7195:


I think -w will work.  It's not a feature limited to GNU grep.  I looked at the 
man page on Solaris, and it is supported.  Google says it's supported on 
OpenBSD, so it's probably also supported on all the other BSD variants.

I would be interested in knowing whether there are any *nix systems where -w is 
not supported on the native grep, and how common those systems are.
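
For example (a sketch; the echoed line just mimics the lsof output from the 
issue description):

{code}
$ echo "java 21609 solr 12u IPv6 TCP *:18983 (LISTEN)" | grep 8983
java 21609 solr 12u IPv6 TCP *:18983 (LISTEN)
$ echo "java 21609 solr 12u IPv6 TCP *:18983 (LISTEN)" | grep -w 8983
$ echo "java 21609 solr 12u IPv6 TCP *:8983 (LISTEN)" | grep -w 8983
java 21609 solr 12u IPv6 TCP *:8983 (LISTEN)
{code}

With -w the pattern must be bounded by non-word characters, so the leading 1 in 
18983 prevents the false match while *:8983 still matches.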

 bin/solr script thinks port 8983 is in use, when in fact it is 18983 that is 
 in use
 ---

 Key: SOLR-7195
 URL: https://issues.apache.org/jira/browse/SOLR-7195
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
 Environment: Linux bigindy5 3.10.0-123.9.2.el7.x86_64 #1 SMP Tue Oct 
 28 18:05:26 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7195.patch, SOLR-7195.patch


 I'm trying to start a Solr instance using the bin/solr script, but it is saying 
 that port 8983 is in use.  It's not in use ... but I am using 18983 for the 
 JMX port on another copy of Solr (its listen port is 8982), and this is what is 
 being detected.
 [solr@bigindy5 solr]$ lsof -i -Pn | grep 8983
 java21609 solr   12u  IPv6 11401290  0t0  TCP *:18983 (LISTEN)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: unsubscribe

2015-03-06 Thread Erick Erickson
Please look at the unsubscribe link (and the associated problems link) 
here: http://lucene.apache.org/solr/resources.html

Note, you _must_ use the exact e-mail you signed up with.

Best,
Erick

On Fri, Mar 6, 2015 at 12:43 PM,  ganesh.ya...@sungard.com wrote:
 unsubscribe

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_76) - Build # 4420 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4420/
Java: 32bit/jdk1.7.0_76 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 9E92522925B11B6C-001: java.nio.file.DirectoryNotEmptyException: 

[jira] [Comment Edited] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351042#comment-14351042
 ] 

Paul Elschot edited comment on LUCENE-6328 at 3/6/15 10:36 PM:
---

OTOH Weight already has this:
{code}
public abstract Scorer scorer(LeafReaderContext context, Bits acceptDocs) 
throws IOException;
{code}

This means the method that returns a (subclass of) DocIdSet gets a 
LeafReaderContext argument, which means that the Query-Segment split is almost 
there: This method would need to be split into a method that returns a 
(subclass of a) DocIdSet and a method that returns a (subclass of a) DISI.
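
A hypothetical sketch of that split (the method names here are invented for 
illustration, not actual Lucene API):

{code}
// produce the per-segment set, given the leaf context
public abstract DocIdSet docIdSet(LeafReaderContext context, Bits acceptDocs)
    throws IOException;
// produce the iterator over that set; a Scorer would be one such subclass
public abstract DocIdSetIterator iterator(LeafReaderContext context, Bits acceptDocs)
    throws IOException;
{code}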


was (Author: paul.elsc...@xs4all.nl):
OTOH Weight already has this:
{code}
public abstract Scorer scorer(LeafReaderContext context, Bits acceptDocs) 
throws IOException;
{code}

So the method that returns a (subclass of) DocIdSet gets a LeafReaderContext 
argument, which means that the Query-Segment split is already there.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-06 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6339:
-
Attachment: LUCENE-6339.patch

Thanks [~mikemccand] and [~simonw] for the feedback!
{quote}
When I try to ant test with the patch on current 5.x some things are
angry
{quote}
This is fixed.
Hmm, interestingly enough those errors do not show up for me using Java 8.

Updated Patch:
* removed {{private boolean isReservedInputCharacter(char c)}} and moved 
reserved input char check to {{toAutomaton(CharSequence key)}}
* use {{CodecUtil.checkIndexHeader}} and {{CodecUtil.writeIndexHeader}} for all 
files in the custom postings format
* use {{if (success == false)}} instead of {{if (!success)}}
* proper sync for loading FSTs concurrently
* added {{TopSuggestDocs.merge}} method
* make sure {{CompletionFieldsConsumer#close()}} and 
{{CompletionFieldsProducer#close()}} properly handle closing resources
* removed {{SegmentLookup}} interface; use {{NRTSuggester}} directly
* fixed weight check to not allow negative weights; allow long values
* removed {{FSTBuilder}} and made {{NRTSuggesterBuilder}} and 
{{CompletionTokenStream}} package-private

Still TODO:
* consolidate {{AutomatonUtil}} and {{TokenStreamToAutomaton}}
* make {{CompletionAnalyzer}} immutable
* remove use of extra {{InputStreamDataInput}} in {{CompletionTermWriter#parse}}
* test loading multiple FSTs concurrently
* more unit tests

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up the custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
     @Override
     public PostingsFormat getPostingsFormatForField(String field) {
       if (isSuggestField(field)) {
         return new CompletionPostingsFormat(super.getPostingsFormatForField(field));
       }
       return super.getPostingsFormatForField(field);
     }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an NRT reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects, does not score
 void suggest(String field, CharSequence key, int maxNumPerLeaf, Filter 
 filter, Collector collector)
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used to wrap another analyzer, to tune 
 suggest-field-only parameters. 
 {code:java}
 CompletionAnalyzer completionAnalyzer = new CompletionAnalyzer(analyzer);
 completionAnalyzer.setPreserveSep(..)
 completionAnalyzer.setPreservePositionsIncrements(..)
 completionAnalyzer.setMaxGraphExpansions(..)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4528 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4528/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:53533/repfacttest_c8n_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:53533/repfacttest_c8n_1x3_shard1_replica1
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:597)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:920)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:811)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2728 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2728/

6 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:17511/a_bq/sj/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:17511/a_bq/sj/collection1
at 
__randomizedtesting.SeedInfo.seed([79FE4D6EE6022376:F1AA72B448FE4E8E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:565)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:211)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:556)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:604)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:565)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Comment Edited] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351042#comment-14351042
 ] 

Paul Elschot edited comment on LUCENE-6328 at 3/6/15 10:38 PM:
---

Weight already has this:
{code}
public abstract Scorer scorer(LeafReaderContext context, Bits acceptDocs) 
throws IOException;
{code}

This means the method that returns a (subclass of) DocIdSet gets a 
LeafReaderContext argument, which means that the Query-Segment split is almost 
there: This method would need to be split into a method that returns a 
(subclass of a) DocIdSet and a method that returns a (subclass of a) DISI.


was (Author: paul.elsc...@xs4all.nl):
OTOH Weight already has this:
{code}
public abstract Scorer scorer(LeafReaderContext context, Bits acceptDocs) 
throws IOException;
{code}

This means the method that returns a (subclass of) DocIdSet gets a 
LeafReaderContext argument, which means that the Query-Segment split is almost 
there: This method would need to be split into a method that returns a 
(subclass of a) DocIdSet and a method that returns a (subclass of a) DISI.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6348) Incorrect results from UAX_URL_EMAIL tokenizer

2015-03-06 Thread Benji Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351092#comment-14351092
 ] 

Benji Smith commented on LUCENE-6348:
-

Gotcha. Thanks for your help!

 Incorrect results from UAX_URL_EMAIL tokenizer
 --

 Key: LUCENE-6348
 URL: https://issues.apache.org/jira/browse/LUCENE-6348
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
 Environment: Elasticsearch 1.3.4 on Ubuntu 14.04.2
Reporter: Benji Smith
Assignee: Steve Rowe

 I'm using an analyzer based on the UAX_URL_EMAIL tokenizer, with a maximum 
 token length of 64 characters. I expect the analyzer to discard any text in 
 the URL beyond those 64 characters, but the actual results yield ordinary 
 terms from the tail end of the URL.
 For example, 
 {code}
 curl -XGET "http://localhost:9200/my_index/_analyze?analyzer=uax_url_email_analyzer" \
   -d "hey, check out http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-death-is-optional for some light reading."
 {code}
 The results look like this:
 {code}
 {
   "tokens": [
     {
       "token": "hey",
       "start_offset": 0,
       "end_offset": 3,
       "type": "<ALPHANUM>",
       "position": 1
     },
     {
       "token": "check",
       "start_offset": 5,
       "end_offset": 10,
       "type": "<ALPHANUM>",
       "position": 2
     },
     {
       "token": "out",
       "start_offset": 11,
       "end_offset": 14,
       "type": "<ALPHANUM>",
       "position": 3
     },
     {
       "token": "http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-d",
       "start_offset": 15,
       "end_offset": 79,
       "type": "<URL>",
       "position": 4
     },
     {
       "token": "eath",
       "start_offset": 79,
       "end_offset": 83,
       "type": "<ALPHANUM>",
       "position": 5
     },
     {
       "token": "is",
       "start_offset": 84,
       "end_offset": 86,
       "type": "<ALPHANUM>",
       "position": 6
     },
     {
       "token": "optional",
       "start_offset": 87,
       "end_offset": 95,
       "type": "<ALPHANUM>",
       "position": 7
     },
     {
       "token": "for",
       "start_offset": 96,
       "end_offset": 99,
       "type": "<ALPHANUM>",
       "position": 8
     },
     {
       "token": "some",
       "start_offset": 100,
       "end_offset": 104,
       "type": "<ALPHANUM>",
       "position": 9
     },
     {
       "token": "light",
       "start_offset": 105,
       "end_offset": 110,
       "type": "<ALPHANUM>",
       "position": 10
     },
     {
       "token": "reading",
       "start_offset": 111,
       "end_offset": 118,
       "type": "<ALPHANUM>",
       "position": 11
     }
   ]
 }
 {code}
 The term from the extracted URL is correct, and correctly truncated at 64 
 characters. But as you can see, the analysis pipeline also creates three 
 spurious terms [ "eath", "is", "optional" ] which come from the discarded 
 portion of the URL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-06 Thread Areek Zillur (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351191#comment-14351191
 ] 

Areek Zillur commented on LUCENE-6339:
--

{quote}
you fetch the checksum for the dict file in {{CompletionFieldsProducer#ctor}} 
via {{CodecUtil.retrieveChecksum(dictIn)}}, but you ignore its return value; 
was this intended? I think you don't wanna do that here? Did you intend to 
check the entire file?
I wonder if we should just write one file for both, the index and the FSTs? 
What's the benefit from having two?
{quote}
This was intentional; I used the same convention as 
{{BlockTreeTermsReader#termsIn}} here. The thought was that doing the checksum 
check would be very costly, since in most cases the {{dict}} file would be 
large.
If we write one file instead of two, then the checksum check would be more 
expensive for the index than it is now?
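
For reference, the cost difference being weighed here, as a sketch using the 
two existing {{CodecUtil}} calls:

{code:java}
// footer-only read: seeks to the end and returns the stored checksum
// without verifying the preceding bytes (cheap even for a large dict file)
long stored = CodecUtil.retrieveChecksum(dictIn);

// full verification: re-reads and checksums the entire file; cost grows
// with file size, which is the concern for the dict file
CodecUtil.checksumEntireFile(dictIn);
{code}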

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up the custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
     @Override
     public PostingsFormat getPostingsFormatForField(String field) {
       if (isSuggestField(field)) {
         return new CompletionPostingsFormat(super.getPostingsFormatForField(field));
       }
       return super.getPostingsFormatForField(field);
     }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an NRT reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects, does not score
 void suggest(String field, CharSequence key, int maxNumPerLeaf, Filter 
 filter, Collector collector)
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used to wrap another analyzer, to tune 
 suggest-field-only parameters. 
 {code:java}
 CompletionAnalyzer completionAnalyzer = new CompletionAnalyzer(analyzer);
 completionAnalyzer.setPreserveSep(..)
 completionAnalyzer.setPreservePositionsIncrements(..)
 completionAnalyzer.setMaxGraphExpansions(..)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14351053#comment-14351053
 ] 

Robert Muir commented on LUCENE-6328:
-

but it returns the iterator (subclass of DocIdSetIterator)... that's the only 
place where covariant override might work. Otherwise, DocIdSet just doesn't 
have a parallel there at all. It's wedged in between Weight and Scorer and would 
be an added abstraction/level of indirection.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4527 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4527/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:901)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Created] (SOLR-7202) Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION Collection API actions

2015-03-06 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7202:
---

 Summary: Remove deprecated DELETECOLLECTION, CREATECOLLECTION, 
RELOADCOLLECTION Collection API actions
 Key: SOLR-7202
 URL: https://issues.apache.org/jira/browse/SOLR-7202
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Minor
 Fix For: Trunk, 5.1


I think we can remove DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
action types.

They were marked as deprecated but didn't get removed in 5.0.

While doing a quick check I saw that we can remove Overseer.REMOVECOLLECTION 
and Overseer.REMOVESHARD


Any reason why this would be a bad idea?





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6341.
-
Resolution: Fixed

I added a second test that turns on verbose, too

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.
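 Roughly what the -fast path amounts to (a sketch, not the actual CheckIndex 
 code; dir is assumed to be the index Directory):
 {code:java}
 try (DirectoryReader reader = DirectoryReader.open(dir)) {
   for (LeafReaderContext ctx : reader.leaves()) {
     // verifies the CRC32 checksums of the segment's files, nothing more
     ctx.reader().checkIntegrity();
   }
 }
 {code}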



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7202) Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION Collection API actions

2015-03-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350399#comment-14350399
 ] 

Erick Erickson commented on SOLR-7202:
--

These are just some internal constants, right? For a minute I thought you were 
talking about taking the actions away completely!

 Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
 Collection API actions
 -

 Key: SOLR-7202
 URL: https://issues.apache.org/jira/browse/SOLR-7202
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Minor
 Fix For: Trunk, 5.1


 I think we can remove DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
 action types.
 They were marked as deprecated but didn't get removed in 5.0.
 While doing a quick check I saw that we can remove Overseer.REMOVECOLLECTION 
 and Overseer.REMOVESHARD
 Any reason why this would be a bad idea?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6345) null check all term/fields in queries

2015-03-06 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6345:
---

 Summary: null check all term/fields in queries
 Key: LUCENE-6345
 URL: https://issues.apache.org/jira/browse/LUCENE-6345
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


See the mail thread "is this lucene 4.1.0 bug in PerFieldPostingsFormat".

If anyone seriously thinks adding a null check to ctor will cause measurable 
slowdown to things like regexp or wildcards, they should have their head 
examined.

All queries should just check this crap in ctor and throw exceptions if 
parameters are invalid.
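
i.e., something as simple as this in each query constructor (an illustrative 
sketch only):

{code:java}
public TermQuery(Term term) {
  this.term = Objects.requireNonNull(term, "term must not be null");
}
{code}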



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350366#comment-14350366
 ] 

Robert Muir commented on LUCENE-6341:
-

This is related to my concerns about the option on 4.x segments where it does 
less or nothing at all...

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350386#comment-14350386
 ] 

ASF subversion and git services commented on LUCENE-6341:
-

Commit 1664633 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1664633 ]

LUCENE-6341: Add a -fast option to CheckIndex

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6342) add some missing sanity checks for old codecs

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350393#comment-14350393
 ] 

ASF subversion and git services commented on LUCENE-6342:
-

Commit 1664637 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1664637 ]

LUCENE-6342: add some missing sanity checks for old codecs

 add some missing sanity checks for old codecs
 -

 Key: LUCENE-6342
 URL: https://issues.apache.org/jira/browse/LUCENE-6342
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.1

 Attachments: LUCENE-6341.patch


 We can beef up the FieldInfosReaders and the StoredFieldsReader a bit here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350361#comment-14350361
 ] 

Michael McCandless commented on LUCENE-6341:


OK I'm fine w/ leaving it as is.

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7203) NoHttpResponseException handling in HttpSolrClient is wrong

2015-03-06 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-7203:
---

 Summary: NoHttpResponseException handling in HttpSolrClient is 
wrong
 Key: SOLR-7203
 URL: https://issues.apache.org/jira/browse/SOLR-7203
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward


We've got logic in HttpSolrClient to catch NoHttpResponseException and retry.  
However, this logic appears to be in the wrong place - it's in the createMethod 
function, which doesn't actually execute any http requests at all.  It ought to 
be in executeMethod.

Fixing this might help sort out the persistent Jenkins failures as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350379#comment-14350379
 ] 

Robert Muir commented on LUCENE-6341:
-

I am so frustrated with the back compat, combined with the -exorcise option, 
which traps us. 

I'll make the change for trunk only. 

5.x can have a fast checkindex when something gives.

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7203) NoHttpResponseException handling in HttpSolrClient is wrong

2015-03-06 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-7203:

Attachment: SOLR-7203.patch

Quick n dirty patch moving the retry logic.  Would be good to get some more 
eyes on this though.

 NoHttpResponseException handling in HttpSolrClient is wrong
 ---

 Key: SOLR-7203
 URL: https://issues.apache.org/jira/browse/SOLR-7203
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward
 Attachments: SOLR-7203.patch


 We've got logic in HttpSolrClient to catch NoHttpResponseException and retry. 
  However, this logic appears to be in the wrong place - it's in the 
 createMethod function, which doesn't actually execute any http requests at 
 all.  It ought to be in executeMethod.
 Fixing this might help sort out the persistent Jenkins failures as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6341:

Fix Version/s: Trunk

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.






[jira] [Commented] (LUCENE-6345) null check all term/fields in queries

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350431#comment-14350431
 ] 

Michael McCandless commented on LUCENE-6345:


+1

 null check all term/fields in queries
 -

 Key: LUCENE-6345
 URL: https://issues.apache.org/jira/browse/LUCENE-6345
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 See the mail thread "is this lucene 4.1.0 bug in PerFieldPostingsFormat".
 If anyone seriously thinks adding a null check to a ctor will cause measurable
 slowdown to things like regexp or wildcards, they should have their head
 examined.
 All queries should just check this crap in the ctor and throw exceptions if
 parameters are invalid.
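
As an illustration of the kind of eager ctor validation being proposed (the
class and field names here are invented for the example, not taken from a
patch):

{code:java}
import java.util.Objects;

// Illustrative only: validate inputs in the constructor so a null term/field
// fails fast with a clear message instead of surfacing later in the codec.
final class ExampleTermQuery {
  private final String field;
  private final String text;

  ExampleTermQuery(String field, String text) {
    this.field = Objects.requireNonNull(field, "field must not be null");
    this.text = Objects.requireNonNull(text, "term text must not be null");
  }

  public static void main(String[] args) {
    new ExampleTermQuery("field", null); // throws NullPointerException immediately
  }
}
{code}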






[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350461#comment-14350461
 ] 

Alan Woodward commented on SOLR-7201:
-

You can still create an HttpSolrClient pointing at a core, but core-specific 
queries won't work in that case.  I don't think we want to add 
setDefaultCollection here though.  The problem is, there's no way to know from 
just the passed-in URL string if we're pointing at the container app or at a 
specific core.

I'll add some JavaDoc to the class explaining the different use-cases (both are
sketched in the example below):
* create an HttpSolrClient pointing to a specific core (can't do core admin
requests or requests to another core)
* create an HttpSolrClient pointing to the container app (can do core admin
requests; all core-specific requests should use the core-specific request
methods)
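
A minimal sketch of the two use-cases, assuming the collection-parameter
methods added in SOLR-7155 (URLs and core names are placeholders):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class HttpSolrClientUsage {
  public static void main(String[] args) throws Exception {
    // Pointing at a specific core: only requests against that core make sense.
    HttpSolrClient coreClient = new HttpSolrClient("http://localhost:8983/solr/collection1");
    coreClient.query(new SolrQuery("*:*"));
    coreClient.close();

    // Pointing at the container app: core admin requests work, and
    // core-specific requests name the core via the collection parameter.
    HttpSolrClient rootClient = new HttpSolrClient("http://localhost:8983/solr");
    CoreAdminRequest.getStatus(null, rootClient); // status of all cores
    rootClient.query("collection1", new SolrQuery("*:*"));
    rootClient.close();
  }
}
{code}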

 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.






[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350359#comment-14350359
 ] 

Robert Muir commented on LUCENE-6341:
-

My concern is that it oversells it :)

The checks it does are all codec-specific.

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.






[jira] [Resolved] (LUCENE-6342) add some missing sanity checks for old codecs

2015-03-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6342.
-
   Resolution: Fixed
Fix Version/s: 5.1

 add some missing sanity checks for old codecs
 -

 Key: LUCENE-6342
 URL: https://issues.apache.org/jira/browse/LUCENE-6342
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.1

 Attachments: LUCENE-6341.patch


 We can beef up the FieldInfosReaders and the StoredFieldsReader a bit here.






[jira] [Created] (LUCENE-6344) Remove checkindex -exorcise

2015-03-06 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6344:
---

 Summary: Remove checkindex -exorcise
 Key: LUCENE-6344
 URL: https://issues.apache.org/jira/browse/LUCENE-6344
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


There is just no excuse to have such a horrible option here. If someone wants 
to delete their data, they can do it themselves. This is such a trap.






[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350454#comment-14350454
 ] 

Shawn Heisey commented on SOLR-7201:


With this change, would the URL to create HttpSolrClient objects change so that 
you would pass in the container app context instead of a core base URL?  If so, 
what happens if the object is created with a core URL?  Do we need 
setDefaultCollection in HttpSolrClient?


 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.






[jira] [Commented] (SOLR-7155) Add an optional 'collection' parameter to all SolrClient methods

2015-03-06 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350143#comment-14350143
 ] 

Alan Woodward commented on SOLR-7155:
-

Shawn: see SOLR-7201

 Add an optional 'collection' parameter to all SolrClient methods
 

 Key: SOLR-7155
 URL: https://issues.apache.org/jira/browse/SOLR-7155
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: 5.1

 Attachments: SOLR-7155.patch


 As discussed on SOLR-7127.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1990 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1990/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.util.TestUtils.testNanoTimeSpeed

Error Message:
Time taken for System.nanoTime is too high

Stack Trace:
java.lang.AssertionError: Time taken for System.nanoTime is too high
at 
__randomizedtesting.SeedInfo.seed([B0AE749D30928AF9:C65460AB3C3951A9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.util.TestUtils.testNanoTimeSpeed(TestUtils.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestUtils

Error Message:
41 threads leaked from SUITE scope at org.apache.solr.util.TestUtils: 1) 
Thread[id=5285, name=nanoTimeTestThread-2361-thread-8, state=TIMED_WAITING, 
group=TGRP-TestUtils] at sun.misc.Unsafe.park(Native Method)

[jira] [Created] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread JIRA
András Péteri created LUCENE-6343:
-

 Summary: Missing character in DefaultSimilarity's javadoc
 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Priority: Minor


The part which describes precision loss of norm values is missing a character; 
the encoded input value {{0.89}} in the example will actually be decoded to 
{{0.875}}, not {{0.75}}.
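
For reference, the loss can be demonstrated directly with the 3.15 float
encoding used for norms (a quick sketch, assuming
org.apache.lucene.util.SmallFloat is on the classpath):

{code:java}
import org.apache.lucene.util.SmallFloat;

public class NormPrecision {
  public static void main(String[] args) {
    byte encoded = SmallFloat.floatToByte315(0.89f);
    float decoded = SmallFloat.byte315ToFloat(encoded);
    System.out.println(decoded); // prints 0.875, not 0.75
  }
}
{code}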






[jira] [Updated] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

András Péteri updated LUCENE-6343:
--
Attachment: LUCENE-6343.patch

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Priority: Minor
 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.






[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350351#comment-14350351
 ] 

Simon Willnauer commented on LUCENE-6339:
-

Hey Areek, I agree with Mike, this looks awesome... lemme give you some comments

 * can we make {{CompletionAnalyzer}} immutable by any chance? I'd really like
to not have setters if possible. For that I guess its constants need to be
public as well?
 * is {{private boolean isReservedInputCharacter(char c)}} needed, since we
then check it again afterwards in the {{checkKey}} method? Maybe you just wanna
use a switch here?
 * In {{CompletionFieldsConsumer#close()}} I think we need to make sure
{{IOUtils.close(dictOut);}} is also called if an exception is hit?
 * do we need the extra {{InputStreamDataInput}} in
{{CompletionTermWriter#parse}}? I mean, it's a byte input stream, so we should
be able to read all of the bytes?
 * {{SuggestPayload}} doesn't need a default ctor
 * can we use {{ if (success == false) }} instead of {{ if (!success) }} as a
pattern in general?
 * use try / finally in {{CompletionFieldsProducer#close()}} to ensure all
resources are closed, or pass both the dict and {{delegateFieldsProducer}} to
IOUtils#close()? (see the sketch after this list)
 * you fetch the checksum for the dict file in {{CompletionFieldsProducer#ctor}}
via {{CodecUtil.retrieveChecksum(dictIn);}} but you ignore its return value,
was this intended? I think you don't wanna do that here? Did you intend
to check the entire file?
 * I wonder if we should just write one file for both the index and the FSTs?
What's the benefit from having two?
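
A minimal sketch of that close() suggestion, assuming the field names from this
discussion ({{dictIn}} and {{delegateFieldsProducer}}); this is a fragment, not
a patch:

{code:java}
import java.io.IOException;
import org.apache.lucene.util.IOUtils;

// Illustrative fragment only: IOUtils.close() attempts to close every
// argument and rethrows the first exception, so the delegate still gets
// closed even if closing the dictionary input fails.
@Override
public void close() throws IOException {
  IOUtils.close(dictIn, delegateFieldsProducer);
}
{code}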

For loading the dict you put a comment in there saying {{// is there a better
way of doing this?}}

I think what you need to do is this:

{code}
public synchronized SegmentLookup lookup() throws IOException {
  if (lookup == null) {
    // dictClone is a field-private clone, so multiple fields can load concurrently
    try (IndexInput dictClone = dictIn.clone()) {
      dictClone.seek(offset);
      lookup = NRTSuggester.load(dictClone);
    }
  }
  return lookup;
}
{code}

I'd appreciate a test that this works just fine, i.e. loading multiple FSTs
concurrently.

I didn't get further than this due to lack of time, but I will come back to
this either today or tomorrow. Good stuff, Areek!

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
   {code:java}
   // hook up custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
     @Override
     public PostingsFormat getPostingsFormatForField(String field) {
       if (isSuggestField(field)) {
         return new CompletionPostingsFormat(super.getPostingsFormatForField(field));
       }
       return super.getPostingsFormatForField(field);
     }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an nrt reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
   {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 

[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350350#comment-14350350
 ] 

Michael McCandless commented on LUCENE-6341:


bq.  Reader open is doing this check.

Right, I just mean the usage output ("-fast: just verify file checksums,
omitting logical integrity checks") is selling this option short, because we do
more than just verify file checksums.  Maybe it could say something like "only
perform fast verification such as file checksums, segment ids are consistent,
etc."?

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.






[jira] [Commented] (SOLR-7187) SolrCloud does not fully clean collection after delete

2015-03-06 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350353#comment-14350353
 ] 

Mike Drob commented on SOLR-7187:
-

The data dir is deleted by each shard individually when instructed to unload. 
This is good and makes sense.

I'm trying to compare the cloud implementation in 
{{CollectionsHandler.handleDeleteAction}} with a non-cloud implementation, but 
I'm having trouble finding it. I do have a unit test that shows the same 
behavior on a non-hdfs SolrCloud, though.

 SolrCloud does not fully clean collection after delete
 --

 Key: SOLR-7187
 URL: https://issues.apache.org/jira/browse/SOLR-7187
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Mike Drob
 Attachments: log.out.gz


 When I attempt to delete a collection using 
 {{/admin/collections?action=DELETE&name=collection1}} if I go into HDFS I 
 will still see remnants from the collection. No files, but empty directories 
 stick around.
 {noformat}
 [root@solr1 ~]# sudo -u hdfs hdfs dfs -ls -R /solr/collection1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node2
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node3
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node4
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node5
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node6
 {noformat}
 (Edit: I had the wrong log portion here originally)
 In the logs, after deleting all the data, I see:
 {noformat}
 2015-03-03 16:15:14,762 INFO org.apache.solr.servlet.SolrDispatchFilter: 
 [admin] webapp=null path=/admin/cores 
 params={deleteInstanceDir=true&action=UNLOAD&core=collection1_shard5_replica1&wt=javabin&qt=/admin/cores&deleteDataDir=true&version=2}
  status=0 QTime=362 
 2015-03-03 16:15:14,787 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:14,854 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,879 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,896 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,920 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,151 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,170 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,279 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:15,546 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-16 state: SyncConnected type 
 NodeDataChanged
 2015-03-03 16:15:15,562 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
 SyncConnected type NodeChildrenChanged
 2015-03-03 16:15:15,562 INFO 
 org.apache.solr.cloud.OverseerCollectionProcessor: Overseer Collection 
 Processor: Message id:/overseer/collection-queue-work/qn-16 complete, 
 response:{success={solr1.example.com:8983_solr={responseHeader={status=0,QTime=207}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=243}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=243}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=342}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=346}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=362
 {noformat}
 This might be related to SOLR-5023, but I'm not sure.






[jira] [Commented] (SOLR-7197) Can Solr EntityProcessor implement cursors

2015-03-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350288#comment-14350288
 ] 

Shalin Shekhar Mangar commented on SOLR-7197:
-

Hi Raveendra, a patch would be great. Please see 
http://wiki.apache.org/solr/HowToContribute on how to create a patch for 
inclusion. The patch should be against the trunk branch, which will then be
backported to the 5.x branch once ready.

 Can Solr EntityProcessor implement cursors
 --

 Key: SOLR-7197
 URL: https://issues.apache.org/jira/browse/SOLR-7197
 Project: Solr
  Issue Type: Wish
  Components: contrib - DataImportHandler
Affects Versions: 5.0
 Environment: Prod
Reporter: Raveendra Yerraguntl

 package org.apache.solr.handler.dataimport;

 class SolrEntityProcessor {
   protected SolrDocumentList doQuery(int start) {
     SolrQuery solrQuery = new SolrQuery(queryString);
     solrQuery.setRows(rows);
     solrQuery.setStart(start);
     if (fields != null) {
       for (String field : fields) {
         solrQuery.addField(field);
       }
     }
     solrQuery.setRequestHandler(requestHandler);
     solrQuery.setFilterQueries(filterQueries);
     solrQuery.setTimeAllowed(timeout * 1000);

     QueryResponse response = null;
     try {
       response = solrClient.query(solrQuery);
     } catch (SolrServerException e) {
       if (ABORT.equals(onError)) {
         wrapAndThrow(SEVERE, e);
       } else if (SKIP.equals(onError)) {
         wrapAndThrow(DataImportHandlerException.SKIP_ROW, e);
       }
     }
 ---
 If the doQuery variant can be implemented with a cursor, it would help with
 any heavy lifting (bulk processing) in the entity processor.
 If permitted, I can contribute the fix. Currently I am using 4.10, am seeing
 the performance issues, and am planning a workaround. If a cursor were
 available it would really help.
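
For reference, deep paging with a cursor in SolrJ looks roughly like this (a
sketch using the standard cursorMark API; URLs are placeholders, and cursors
require the query to be sorted on the uniqueKey field):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorPaging {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(500);
    q.setSort("id", SolrQuery.ORDER.asc); // cursors need a total order on the uniqueKey
    String cursorMark = CursorMarkParams.CURSOR_MARK_START;
    while (true) {
      q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
      QueryResponse rsp = client.query(q);
      // ... process rsp.getResults() ...
      String next = rsp.getNextCursorMark();
      if (cursorMark.equals(next)) {
        break; // cursor did not advance: no more results
      }
      cursorMark = next;
    }
    client.close();
  }
}
{code}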






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2725 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2725/

6 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:63138/tu_s/q/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:63138/tu_s/q/collection1
at 
__randomizedtesting.SeedInfo.seed([EA569BC7938C0641:6202A41D3D706BB9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:565)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:211)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:556)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:604)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:565)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (SOLR-7187) SolrCloud does not fully clean collection after delete

2015-03-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350503#comment-14350503
 ] 

Shalin Shekhar Mangar commented on SOLR-7187:
-

bq. I'm trying to compare the cloud implementation in 
CollectionsHandler.handleDeleteAction with a non-cloud implementation, but I'm 
having trouble finding it.

A non-cloud implementation would just call CoreAdminHandler.unload directly. 
The CollectionHandler.handleDeleteCollection eventually calls (via a Zookeeper 
queue) OverseerCollectionProcessor.deleteCollection which then calls 
CoreAdminHandler.unload.

 SolrCloud does not fully clean collection after delete
 --

 Key: SOLR-7187
 URL: https://issues.apache.org/jira/browse/SOLR-7187
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Mike Drob
 Attachments: log.out.gz


 When I attempt to delete a collection using 
 {{/admin/collections?action=DELETE&name=collection1}} if I go into HDFS I 
 will still see remnants from the collection. No files, but empty directories 
 stick around.
 {noformat}
 [root@solr1 ~]# sudo -u hdfs hdfs dfs -ls -R /solr/collection1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node2
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node3
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node4
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node5
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node6
 {noformat}
 (Edit: I had the wrong log portion here originally)
 In the logs, after deleting all the data, I see:
 {noformat}
 2015-03-03 16:15:14,762 INFO org.apache.solr.servlet.SolrDispatchFilter: 
 [admin] webapp=null path=/admin/cores 
 params={deleteInstanceDir=true&action=UNLOAD&core=collection1_shard5_replica1&wt=javabin&qt=/admin/cores&deleteDataDir=true&version=2}
  status=0 QTime=362 
 2015-03-03 16:15:14,787 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:14,854 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,879 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,896 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,920 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,151 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,170 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,279 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:15,546 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-16 state: SyncConnected type 
 NodeDataChanged
 2015-03-03 16:15:15,562 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
 SyncConnected type NodeChildrenChanged
 2015-03-03 16:15:15,562 INFO 
 org.apache.solr.cloud.OverseerCollectionProcessor: Overseer Collection 
 Processor: Message id:/overseer/collection-queue-work/qn-16 complete, 
 response:{success={solr1.example.com:8983_solr={responseHeader={status=0,QTime=207}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=243}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=243}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=342}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=346}},solr1.example.com:8983_solr={responseHeader={status=0,QTime=362
 {noformat}
 This might be related to SOLR-5023, but I'm not sure.




[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350518#comment-14350518
 ] 

Shawn Heisey commented on SOLR-7201:


bq. And forcing people to add a null parameter to their code all over the place 
is just ugly.

I agree with this, but I think the suggested forward migration path should be a 
situation where only the new methods are used, and every call will include a 
non-null string for collection.  Leaving out setDefaultCollection would 
encourage this as well, so that sounds like a good plan.  I think perhaps we 
should deprecate setDefaultCollection in CloudSolrClient as well, since it is 
not threadsafe and new functionality removes the need for it.

Deprecation gives the developer an instant clue that they are not using the 
class in the way that the authors intended.  We can describe the design intent 
in the javadoc ... but deprecation gives an IDE user an immediate indicator 
that they should read that javadoc and change their code.

I'm excited about this new functionality.  I'll be able to change code that 
currently creates sixty HttpSolrServer objects (56 for talking to individual 
cores, four for CoreAdminRequest functionality to the servers) so it only 
creates four HttpSolrClient objects.


 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.






[jira] [Commented] (SOLR-7175) <optimize maxSegments="2"/> results in more than 2 segments after optimize finishes

2015-03-06 Thread Tom Burton-West (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350575#comment-14350575
 ] 

Tom Burton-West commented on SOLR-7175:
---

Hi Mike,

Our code is supposed to completely finish indexing before then calling a commit 
and optimize.
I was trying to figure out how indexed documents could be in RAM after we 
called a commit and the resulting flush finished. Indexing should have 
completed prior to our code calling a commit and then optimize (ie. force 
merge).  We will double check our code and of course if we find a bug in the 
code we'll fix the bug, test, and  close the issue.   The reason we suspected 
something on the Solr4/Lucene4 end is that we haven't made any changes to the 
indexing/optimizing code in quite a while and we were not seeing this issue 
with Solr 4.6.



 <optimize maxSegments="2"/> results in more than 2 segments after optimize
 finishes
 ---

 Key: SOLR-7175
 URL: https://issues.apache.org/jira/browse/SOLR-7175
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
 Environment: linux
Reporter: Tom Burton-West
Priority: Minor
 Attachments: build-1.indexwriterlog.2015-02-23.gz, 
 build-4.iw.2015-02-25.txt.gz, solr4.shotz


 After finishing indexing and running a commit, we issue an
 {{<optimize maxSegments="2"/>}} to Solr.  With Solr 4.10.2 we are seeing one or two shards 
 (out of 12) with 3 or 4 segments after the optimize finishes.  There are no 
 errors in the Solr logs or indexwriter logs.






[jira] [Commented] (SOLR-7187) SolrCloud does not fully clean collection after delete

2015-03-06 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350586#comment-14350586
 ] 

Mike Drob commented on SOLR-7187:
-

Yea, I traced the queues through from delete to unload. Based on your comments
and running a few sample tests, it sounds like the difference in implementation
is architectural, which leaves me a little confused about how we name things.

Data dir in HDFS: 
{{hdfs://localhost:44036/solr/delete_data_dir/core_node2/data}}
Instance dir in HDFS: 
{{/tmp/solr.cloud.hdfs.StressHdfsTest-C14CE921F29EF7F3-001/tempDir-005/delete_data_dir_shard2_replica1}}

Data dir in non-HDFS: 
{{/tmp/solr.cloud.CreateDeleteCollectionTest-AD37514288D15339-001/tempDir-003/delete_data_dir_shard1_replica2/data/}}
Instance dir in non-HDFS: 
{{/tmp/solr.cloud.CreateDeleteCollectionTest-AD37514288D15339-001/tempDir-003/delete_data_dir_shard1_replica2}}

When we delete the instance dir, we are always looking at a local directory. I 
could wire up a patch to delete {{dataDir.getParent()}} when deleting the data 
directory if we are using HDFS, but that seems fragile. Maybe it makes the most 
sense to delete the entire collection dir as a post step, if we determine that 
we're on HDFS. My impression is that there is no common collection-wide local 
directory for non-HDFS use cases, even when multiple cores are hosted on the 
same server, which is why this wasn't seen outside of HDFS.

Is the {{tempDir-003}} part of the path a meaningful directory level or just an 
artifact of JUnit structuring; i.e. should we be worrying about deleting it 
when we delete a collection (we currently do not)?
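
A rough sketch of the HDFS-only cleanup idea (hypothetical, using the Hadoop
FileSystem API; the path and configuration are placeholders):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCoreCleanup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dataDir = new Path("hdfs://localhost:8020/solr/collection1/core_node1/data");
    FileSystem fs = dataDir.getFileSystem(conf);
    // Removing dataDir alone leaves the empty core_node1 directory behind;
    // deleting its parent recursively cleans up the per-core remnant too.
    fs.delete(dataDir.getParent(), true);
  }
}
{code}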

 SolrCloud does not fully clean collection after delete
 --

 Key: SOLR-7187
 URL: https://issues.apache.org/jira/browse/SOLR-7187
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Mike Drob
 Attachments: log.out.gz


 When I attempt to delete a collection using 
 {{/admin/collections?action=DELETE&name=collection1}} if I go into HDFS I 
 will still see remnants from the collection. No files, but empty directories 
 stick around.
 {noformat}
 [root@solr1 ~]# sudo -u hdfs hdfs dfs -ls -R /solr/collection1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node1
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node2
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node3
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node4
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node5
 drwxr-xr-x   - solr solr  0 2015-03-03 15:42 
 /solr/collection1/core_node6
 {noformat}
 (Edit: I had the wrong log portion here originally)
 In the logs, after deleting all the data, I see:
 {noformat}
 2015-03-03 16:15:14,762 INFO org.apache.solr.servlet.SolrDispatchFilter: 
 [admin] webapp=null path=/admin/cores 
 params={deleteInstanceDir=true&action=UNLOAD&core=collection1_shard5_replica1&wt=javabin&qt=/admin/cores&deleteDataDir=true&version=2}
  status=0 QTime=362 
 2015-03-03 16:15:14,787 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:14,854 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,879 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,896 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:14,920 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,151 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,170 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type 
 NodeChildrenChanged
 2015-03-03 16:15:15,279 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
 2015-03-03 16:15:15,546 INFO org.apache.solr.cloud.DistributedQueue: 
 LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-16 state: SyncConnected type 
 NodeDataChanged
 2015-03-03 

[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350501#comment-14350501
 ] 

Alan Woodward commented on SOLR-7201:
-

bq. I've been wondering whether the old methods should be deprecated

I wouldn't have thought so, they still work fine for lots of situations.  And 
forcing people to add a null parameter to their code all over the place is just 
ugly.

 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.






[jira] [Created] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6346:
---

 Summary: always initCause() ParseExceptions from Version.java
 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


This is only done some of the time; we should do it always.
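
For illustration, the pattern being asked for looks like this (a sketch; the
message and triggering exception are invented for the example):

{code:java}
import java.text.ParseException;

public class InitCauseExample {
  static int parseMajor(String version) throws ParseException {
    try {
      return Integer.parseInt(version.split("\\.")[0]);
    } catch (NumberFormatException nfe) {
      ParseException pe = new ParseException("Invalid version: " + version, 0);
      pe.initCause(nfe); // keep the root cause instead of swallowing it
      throw pe;
    }
  }

  public static void main(String[] args) throws ParseException {
    System.out.println(parseMajor("5.1")); // prints 5
  }
}
{code}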






[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350489#comment-14350489
 ] 

Shawn Heisey commented on SOLR-7201:


Thanks for the update.

I've been wondering whether the old methods should be deprecated.  It's clear 
that they can still be useful in some circumstances, but I think perhaps we 
should encourage people to use the new methods with null for the collection 
in those circumstances ... and explain everything as clearly as we can in the 
javadoc.


 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 778 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/778/

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
ERROR: SolrIndexSearcher opens=30 closes=29

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=30 closes=29
at __randomizedtesting.SeedInfo.seed([5B237F1A3DCE8604]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:494)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=84, 
name=qtp805310106-84, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest]   
  at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient.blockUntilFinished(ConcurrentUpdateSolrClient.java:389)
 at 
org.apache.solr.update.StreamingSolrClients.blockUntilFinished(StreamingSolrClients.java:103)
 at 
org.apache.solr.update.SolrCmdDistributor.blockAndDoRetries(SolrCmdDistributor.java:229)
 at 
org.apache.solr.update.SolrCmdDistributor.finish(SolrCmdDistributor.java:89)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:780)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1630)
 at 
org.apache.solr.update.processor.LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:183)
 at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:83)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2026) 
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:808) 
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:435)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:102)
 at 

[jira] [Commented] (SOLR-7202) Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION Collection API actions

2015-03-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350486#comment-14350486
 ] 

Shalin Shekhar Mangar commented on SOLR-7202:
-

+1

 Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
 Collection API actions
 -

 Key: SOLR-7202
 URL: https://issues.apache.org/jira/browse/SOLR-7202
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Minor
 Fix For: Trunk, 5.1


 I think we can remove the DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION
 action types.
 They were marked as deprecated but didn't get removed in 5.0.
 While doing a quick check I saw that we can also remove Overseer.REMOVECOLLECTION
 and Overseer.REMOVESHARD.
 Any reason why this would be a bad idea?






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4419 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4419/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:901)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350561#comment-14350561
 ] 

Paul Elschot commented on LUCENE-6328:
--

I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using a subclass result type for the Weight.scorer method (returning Scorer) 
and then for the (meanwhile old) Filter.docIdSet.iterator method (returning 
DISI). These methods should be merged into one in that case.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.
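
For anyone who hasn't bumped into it: the language feature referenced here is 
covariant return types, added in Java 5. A minimal sketch with made-up class 
names, not Lucene's actual hierarchy:

{code}
public class CovariantReturnDemo {
    static class Base {
        Object result() { return "base"; }
    }
    static class Derived extends Base {
        // Java 5+ covariant override: the return type is narrowed
        // from Object to String, exactly the "stricter result type
        // in a subclass method" being described.
        @Override
        String result() { return "derived"; }
    }
    public static void main(String[] args) {
        String s = new Derived().result(); // no cast needed at the call site
        System.out.println(s);
    }
}
{code}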

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350574#comment-14350574
 ] 

Paul Elschot commented on LUCENE-6328:
--

Actually that should be: ... make Weight a subclass of DocIdSet.  This involves 
using the same method in Weight to return a Scorer as the method that is/was 
used in DocIdSet to return a DISI.

With some luck this can then be repeated for Query and Filter: make Query a 
subclass of Filter. This involves using the same method in Query to return a 
Weight as the method that is/was used in Filter to return a DocIdSet.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350561#comment-14350561
 ] 

Paul Elschot edited comment on LUCENE-6328 at 3/6/15 4:57 PM:
--

I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518. [corrected, see next post].

I have not tried this, but my guess is that this approach will simplify the 
code in many places.


was (Author: paul.elsc...@xs4all.nl):
I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using the same method in Weight to return a Scorer as the method that is/was 
used in Filter to return a DISI.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7175) optimize maxSegments=2/ results in more than 2 segments after optimize finishes

2015-03-06 Thread Tom Burton-West (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350575#comment-14350575
 ] 

Tom Burton-West edited comment on SOLR-7175 at 3/6/15 4:56 PM:
---

Hi Mike,

Our code is supposed to completely finish indexing before calling a commit 
and then an optimize.
I was trying to figure out how indexed documents could be in RAM after we 
called a commit and the resulting flush finished. Indexing should have 
completed prior to our code calling a commit and then optimize (i.e. force 
merge).  We will double-check our code, and of course if we find a bug in the 
code we'll fix the bug, test, and close the issue.   The reason we suspected 
something on the Solr4/Lucene4 end is that we haven't made any changes to the 
indexing/optimizing code in quite a while and we were not seeing this issue 
with Solr 3.6.




was (Author: tburtonwest):
Hi Mike,

Our code is supposed to completely finish indexing before then calling a commit 
and optimize.
I was trying to figure out how indexed documents could be in RAM after we 
called a commit and the resulting flush finished. Indexing should have 
completed prior to our code calling a commit and then optimize (ie. force 
merge).  We will double check our code and of course if we find a bug in the 
code we'll fix the bug, test, and  close the issue.   The reason we suspected 
something on the Solr4/Lucene4 end is that we haven't made any changes to the 
indexing/optimizing code in quite a while and we were not seeing this issue 
with Solr 4.6.
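
For reference, the client-side sequence described above boils down to the 
following, a sketch against the 4.x SolrJ API with a hypothetical core URL:

{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class CommitThenOptimize {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; substitute a real core/collection URL.
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/core1");
        // 1. All indexing finishes, then a hard commit flushes pending documents.
        solr.commit();
        // 2. Force-merge ("optimize") down to at most 2 segments, the
        //    SolrJ equivalent of an <optimize maxSegments="2"/> request.
        solr.optimize(true, true, 2);
        solr.shutdown();
    }
}
{code}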



 optimize maxSegments=2/ results in more than 2 segments after optimize 
 finishes
 ---

 Key: SOLR-7175
 URL: https://issues.apache.org/jira/browse/SOLR-7175
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
 Environment: linux
Reporter: Tom Burton-West
Priority: Minor
 Attachments: build-1.indexwriterlog.2015-02-23.gz, 
 build-4.iw.2015-02-25.txt.gz, solr4.shotz


 After finishing indexing and running a commit, we issue an optimize 
 maxSegments=2/ to Solr.  With Solr 4.10.2 we are seeing one or two shards 
 (out of 12) with 3 or 4 segments after the optimize finishes.  There are no 
 errors in the Solr logs or indexwriter logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350539#comment-14350539
 ] 

Shawn Heisey commented on SOLR-7201:


Further thoughts:  If we pursue deprecation, then the trunk code should 
probably throw an IllegalArgumentException when collection is null.
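
A minimal sketch of the kind of guard being suggested; the method and its 
signature are hypothetical stand-ins, not the actual HttpSolrClient code:

{code}
public class CollectionGuardSketch {
    // Hypothetical helper; the real client would build the full request
    // path from its base URL plus the collection/core name.
    static String buildRequestPath(String baseUrl, String collection) {
        if (collection == null) {
            throw new IllegalArgumentException(
                "No collection specified and no default collection is configured");
        }
        return baseUrl + "/" + collection + "/select";
    }

    public static void main(String[] args) {
        System.out.println(buildRequestPath("http://localhost:8983/solr", "core1"));
    }
}
{code}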


 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350561#comment-14350561
 ] 

Paul Elschot edited comment on LUCENE-6328 at 3/6/15 4:46 PM:
--

I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using the same method in Weight to return a Scorer as the method that is used 
in Filter to return a DISI.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.


was (Author: paul.elsc...@xs4all.nl):
I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using a subclass result type for the Weight.scorer method (returning Scorer) 
and than for the (meanwhile old) Filter.docIdSet.iterator method (returning 
DISI). These methods should be merged into one in that case.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-06 Thread Jennifer Stumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350571#comment-14350571
 ] 

Jennifer Stumpf commented on SOLR-6350:
---

We have a tremendous need for percentiles, especially for 5.0.  We were 
formerly using the patch from 3583 for 4.10.3, but we are getting ready to 
deploy with 5.0.
Is there a plan for continuing with this improvement?  Have you seen 
performance issues, or do you just anticipate them (WRT the compression parameter)?

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-xu.patch, SOLR-6350-xu.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}
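
For anyone trying the syntax above from SolrJ rather than a raw URL, a hedged 
sketch (the field name and percentile list are just the example values from 
the description):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class PercentileStatsQuery {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("*:*");
        q.set("stats", true);
        // Local-params syntax from the example above.
        q.set("stats.field", "{!percentiles='1,2,98,99,99.999'}price");
        System.out.println(q); // inspect the encoded request parameters
    }
}
{code}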



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350561#comment-14350561
 ] 

Paul Elschot edited comment on LUCENE-6328 at 3/6/15 4:49 PM:
--

I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using the same method in Weight to return a Scorer as the method that is/was 
used in Filter to return a DISI.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.


was (Author: paul.elsc...@xs4all.nl):
I am probably late with this, but anyway. Using a stricter result type 
(subclass of the result type) in a subclass method was not available in Java at 
the time of LUCENE-1518, but it is now.

That means that there is an alternative that was not available at the time of 
LUCENE-1518, and that is to make Weight a subclass of Filter. This involves 
using the same method in Weight to return a Scorer as the method that is used 
in Filter to return a DISI.

I have not tried this, but my guess is that this approach will simplify the 
code in many places.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6346:

Attachment: LUCENE-6346.patch

 always initCause() ParseExceptions from Version.java
 

 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6346.patch


 This is only done some of the time, we should do it always.
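
The pattern being asked for, as a small self-contained sketch; the parsing 
detail is invented, the point is chaining the underlying exception so the 
root cause survives:

{code}
import java.text.ParseException;

public class InitCauseSketch {
    static int parseMajor(String version) throws ParseException {
        try {
            return Integer.parseInt(version.split("\\.")[0]);
        } catch (NumberFormatException nfe) {
            ParseException pe = new ParseException("Invalid version: " + version, 0);
            pe.initCause(nfe); // preserve the root cause instead of dropping it
            throw pe;
        }
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseMajor("5.1"));
    }
}
{code}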



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: No need to call isIndexStale if full cop...

2015-03-06 Thread stephlag
GitHub user stephlag opened a pull request:

https://github.com/apache/lucene-solr/pull/131

No need to call isIndexStale if full copy is already needed

This will avoid having the message "File _3ww7_Lucene41_0.tim expected to 
be 2027667 while it is 1861076" when in fact there was already a match on 
commit.getGeneration() == latestGeneration
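
In other words, the staleness check should be guarded by the full-copy 
decision; a sketch with illustrative stand-ins for the SnapPuller state (the 
exact change is in the pull request itself):

{code}
public class StaleCheckSketch {
    // Stand-ins for SnapPuller state; names are illustrative only.
    static boolean isFullCopyNeeded = true;

    static boolean isIndexStale() {
        // In the replication handler this compares local file sizes against
        // the master's file list and logs the "expected to be ... while it
        // is ..." warning seen above.
        System.out.println("warning logged");
        return true;
    }

    public static void main(String[] args) {
        // Proposed order: only run the staleness check (and its warning)
        // when a full copy has not already been decided on.
        if (!isFullCopyNeeded && isIndexStale()) {
            isFullCopyNeeded = true;
        }
    }
}
{code}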

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/stephlag/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #131


commit 1409f4ed7827e155677a2933801e1d491f2d72fa
Author: Stephan L <stefa...@yahoo.fr>
Date:   2015-03-06T16:39:38Z

No need to call isIndexStale if full copy is already needed

This will avoid having the message "File _3ww7_Lucene41_0.tim expected to 
be 2027667 while it is 1861076" when in fact there was already a match on 
commit.getGeneration() == latestGeneration




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2015-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350585#comment-14350585
 ] 

David Smiley commented on LUCENE-4942:
--

I'd like to solve this issue (excessive terms) and _also_ address 
differentiating between fully-contained leaves vs approximated leaves (for 
LUCENE-5776) in one go, tracked on this issue, to avoid dealing with back-compat 
more than once.  That is, we change how PrefixTree derivative strategies 
encode the term data just once, instead of doing it over more than one issue.  
And I'm thinking that on trunk we wouldn't worry about the back-compat (it is 
trunk after all), and then the port to 5x would have to consider it -- the 
down-side being some spatial code on trunk vs 5x may vary a bit.  Perhaps the 
back-compat detection in 5x would work via a check for Version similar to 
Analyzer's having a version property that can optionally be set.

I'm not sure how contentious it may be to simply forgo back-compat.  _Just_ 
re-index.  And you're not affected if all you have is point data, which seems 
to be at least 80% of the users using spatial.
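
A sketch of the kind of version gate being floated, analogous to Analyzer's 
optional version property; the cutover constant and the helper method are 
purely illustrative, nothing here is committed code:

{code}
import org.apache.lucene.util.Version;

public class VersionGateSketch {
    // Hypothetical flag a PrefixTree strategy could derive from a
    // user-supplied Version, analogous to Analyzer's version property.
    static boolean useNewCellEncoding(Version indexCreatedVersion) {
        // The constant is illustrative; the real cutover release is undecided.
        return indexCreatedVersion.onOrAfter(Version.LUCENE_5_0_0);
    }

    public static void main(String[] args) {
        System.out.println(useNewCellEncoding(Version.LATEST));
    }
}
{code}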

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, one with the leaf byte and one without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects  Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed my ~40% by 
 doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2724 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2724/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51680/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51680/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([128A01BB586DC130:9ADE3E61F691ACC8]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:597)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:920)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:811)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350155#comment-14350155
 ] 

Michael McCandless commented on LUCENE-6341:


This is awesome.

+1 for patch and 5.x.

Does -verbose work with -fast?  I think it should (we seem to do null checks 
for all the terms dict stats), maybe add a test?  It's a nice (fast) way to see 
RAM usage for the index...

Does -exorcise and -fast work?

In the usage can you also confess that the identifiers are also cross-checked?
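
For reference, the checksum-only verification that -fast would wrap is roughly 
the following, sketched against the public 5.x reader API; the real CheckIndex 
integrates this into its own status reporting:

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class FastCheckSketch {
    public static void main(String[] args) throws Exception {
        try (Directory dir = FSDirectory.open(Paths.get(args[0]));
             DirectoryReader reader = DirectoryReader.open(dir)) {
            // Opening the reader already cross-checks segment identifiers;
            // checkIntegrity() then validates the CRC32 checksum of each file.
            for (LeafReaderContext ctx : reader.leaves()) {
                ctx.reader().checkIntegrity();
            }
            System.out.println("index looks OK (checksums verified)");
        }
    }
}
{code}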

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-7201:

Attachment: SOLR-7201.patch

Patch.

 Implement multicore handling on HttpSolrClient
 --

 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-7201.patch


 Now that SOLR-7155 has added a collection parameter to the various SolrClient 
 methods, we can let HttpSolrClient use it to allow easier multicore handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7201) Implement multicore handling on HttpSolrClient

2015-03-06 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-7201:
---

 Summary: Implement multicore handling on HttpSolrClient
 Key: SOLR-7201
 URL: https://issues.apache.org/jira/browse/SOLR-7201
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor


Now that SOLR-7155 has added a collection parameter to the various SolrClient 
methods, we can let HttpSolrClient use it to allow easier multicore handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6341) add CheckIndex -fast option

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350269#comment-14350269
 ] 

Robert Muir commented on LUCENE-6341:
-

bq. Does -verbose work with -fast?

yes

bq. Does -exorcise and -fast work?

yes

bq. In the usage can you also confess that the identifiers are also 
cross-checked?

 Reader open is doing this check.

 add CheckIndex -fast option
 ---

 Key: LUCENE-6341
 URL: https://issues.apache.org/jira/browse/LUCENE-6341
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6341.patch


 CheckIndex is great for testing and when tracking down lucene bugs. 
 But in cases where users just want to verify their index files are OK, it is 
 very slow and expensive.
 I think we should add a -fast option, that only opens the reader and calls 
 checkIntegrity(). This means all files are the correct files (identifiers 
 match) and have the correct CRC32 checksums.
 For our 10M doc wikipedia index, this is the difference between a 2 second 
 check and a 2 minute check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350630#comment-14350630
 ] 

ASF subversion and git services commented on LUCENE-6346:
-

Commit 1664683 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1664683 ]

LUCENE-6346: always initCause() ParseExceptions from Version.java

 always initCause() ParseExceptions from Version.java
 

 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6346.patch


 This is only done some of the time, we should do it always.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-06 Thread Paul taylor (JIRA)
Paul taylor created LUCENE-6347:
---

 Summary: MultiFieldQueryParser doesnt catch invalid syntax 
properly (due to user using regexpression syntax unwittingly)
 Key: LUCENE-6347
 URL: https://issues.apache.org/jira/browse/LUCENE-6347
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Paul taylor


MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
regexpression syntax unwittingly)
 
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.Version;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

/**
 * Lucene tests
 */
public class LuceneRegExParseTest
{

@Test
public void testSearch411LuceneBugReport() throws Exception
{
Exception e = null;
try
{
String[] fields = new String[2];
fields[0] = "artist";
fields[1] = "recording";

QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41, 
fields, new StandardAnalyzer(Version.LUCENE_41));
qp.parse("artist:pandora /reyli && recording:yo/Alguien");
}
catch(Exception ex)
{
e=ex;
}
assertNotNull(e);
assertTrue(e instanceof ParseException );
}
}

With assertions disabled this test fails as no exception is thrown.
With assertions enabled we get

java.lang.AssertionError
at 
org.apache.lucene.search.MultiTermQuery.<init>(MultiTermQuery.java:252)
at 
org.apache.lucene.search.AutomatonQuery.<init>(AutomatonQuery.java:65)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:90)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:69)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
at 
org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
at 
org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
at 
org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
at 
org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
at 
org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:30)

but this should throw an exception without assertions enabled. Because no 
exception is thrown, a search then fails with the following stack trace

java.lang.NullPointerException
at java.util.TreeMap.getEntry(TreeMap.java:342)
at java.util.TreeMap.get(TreeMap.java:273)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:215)
at 
org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
at 
org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
at 
org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:286)
at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:429)
at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:616)
at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:663)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7204) Improve error handling in create collection API

2015-03-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350710#comment-14350710
 ] 

Mark Miller commented on SOLR-7204:
---

Looks like the problem is in HdfsLock:

{noformat}
} catch (IOException e) {
  log.error("Error creating lock file", e);
  return false;
}
{noformat}

We should look at either just throwing an exception here, or somehow surfacing 
the IOException as the root cause, rather than just logging it and 
returning that we couldn't get the lock.
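
A sketch of the rethrow variant; the exception type, message, and the 
lockFile variable are illustrative stand-ins, not a settled design:

{code}
} catch (IOException e) {
  // Instead of logging and returning false, propagate the failure so
  // callers see the real root cause (e.g. an SSLHandshakeException).
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "Error creating lock file " + lockFile, e);
}
{code}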

 Improve error handling in create collection API
 ---

 Key: SOLR-7204
 URL: https://issues.apache.org/jira/browse/SOLR-7204
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
Priority: Minor

 I was trying to create a collection on a Solrcloud deployed along with 
 kerberized Hadoop cluster. I kept on getting following error,
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
 CREATEing SolrCore 'orders_shard1_replica2': Unable to create core 
 [orders_shard1_replica2] Caused by: Lock obtain timed out: 
 org.apache.solr.store.hdfs.HdfsLockFactory$HdfsLock@451997e1
 On careful analysis of logs, I realized it was due to Solr not being able to 
 talk to HDFS properly because of following error,
 javax.net.ssl.SSLHandshakeException: 
 sun.security.validator.ValidatorException: PKIX path building failed: 
 sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
 valid certification path to requested target
 We should improve the error handling such that we return the root-cause of 
 the error (in this case SSLHandshakeException instead of lock timeout 
 exception).
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350609#comment-14350609
 ] 

Michael McCandless commented on LUCENE-6346:


+1

 always initCause() ParseExceptions from Version.java
 

 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6346.patch


 This is only done some of the time, we should do it always.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6343.

   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

Thanks András!

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.
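
The precision loss is easy to reproduce directly against SmallFloat, which 
DefaultSimilarity uses to encode norms; a small sketch, with the expected 
output in the comment:

{code}
import org.apache.lucene.util.SmallFloat;

public class NormPrecisionDemo {
    public static void main(String[] args) {
        // 3-bit mantissa encoding used for norms: 0.89 is not representable,
        // so it rounds down to the nearest representable value.
        byte encoded = SmallFloat.floatToByte315(0.89f);
        float decoded = SmallFloat.byte315ToFloat(encoded);
        System.out.println(decoded); // prints 0.875, not 0.75
    }
}
{code}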



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350637#comment-14350637
 ] 

ASF subversion and git services commented on LUCENE-6346:
-

Commit 1664686 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1664686 ]

LUCENE-6346: always initCause() ParseExceptions from Version.java

 always initCause() ParseExceptions from Version.java
 

 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6346.patch


 This is only done some of the time, we should do it always.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7203) NoHttpResponseException handling in HttpSolrClient is wrong

2015-03-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350607#comment-14350607
 ] 

Mark Miller commented on SOLR-7203:
---

We have to be careful here - we can't auto retry on NoHttpResponseException 
with updates - it means you don't know if the update was accepted or not.
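
A sketch of the distinction being drawn, with illustrative names rather than 
the actual executeMethod code: retry only when the request is idempotent.

{code}
import org.apache.http.NoHttpResponseException;

public class RetryGuardSketch {
    interface Call { String run() throws Exception; }

    // Retry once on NoHttpResponseException, but only for idempotent
    // requests (queries); an update may already have been applied.
    static String execute(Call call, boolean isIdempotent) throws Exception {
        try {
            return call.run();
        } catch (NoHttpResponseException e) {
            if (!isIdempotent) {
                throw e; // unknown whether the update was accepted; don't retry
            }
            return call.run();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(execute(() -> "ok", true));
    }
}
{code}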

 NoHttpResponseException handling in HttpSolrClient is wrong
 ---

 Key: SOLR-7203
 URL: https://issues.apache.org/jira/browse/SOLR-7203
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward
 Attachments: SOLR-7203.patch


 We've got logic in HttpSolrClient to catch NoHttpResponseException and retry. 
  However, this logic appears to be in the wrong place - it's in the 
 createMethod function, which doesn't actually execute any http requests at 
 all.  It ought to be in executeMethod.
 Fixing this might help sort out the persistent Jenkins failures as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350641#comment-14350641
 ] 

ASF subversion and git services commented on LUCENE-6343:
-

Commit 1664687 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1664687 ]

LUCENE-6343: DefaultSimilarity javadocs has the wrong example for encode/decode 
precision loss

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-06 Thread Paul taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul taylor updated LUCENE-6347:

Description: 
MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
regexpression syntax unwittingly)

{code} 
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.Version;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

/**
 * Lucene tests
 */
public class LuceneRegExParseTest
{

@Test
public void testSearch411LuceneBugReport() throws Exception
{
Exception e = null;
try
{
String[] fields = new String[2];
fields[0] = "artist";
fields[1] = "recording";

QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41, 
fields, new StandardAnalyzer(Version.LUCENE_41));
qp.parse("artist:pandora /reyli && recording:yo/Alguien");
}
catch(Exception ex)
{
e=ex;
}
assertNotNull(e);
assertTrue(e instanceof ParseException );
}
}
{code}
With assertions disabled this test fails as no exception is thrown.
With assertions enabled we get

{code}
java.lang.AssertionError
at 
org.apache.lucene.search.MultiTermQuery.<init>(MultiTermQuery.java:252)
at 
org.apache.lucene.search.AutomatonQuery.<init>(AutomatonQuery.java:65)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:90)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:69)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
at 
org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
at 
org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
at 
org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
at 
org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
at 
org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:30)

but this should throw an exception without assertions enabled. Because no 
exception is thrown, a search then fails with the following stack trace

java.lang.NullPointerException
at java.util.TreeMap.getEntry(TreeMap.java:342)
at java.util.TreeMap.get(TreeMap.java:273)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:215)
at 
org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
at 
org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
at 
org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:286)
at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:429)
at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:616)
at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:663)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
{code}

  was:
MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
regexpression syntax unwittingly)
 
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.Version;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

/**
 * Lucene tests
 */
public class LuceneRegExParseTest
{

@Test
public void testSearch411LuceneBugReport() throws Exception
{
Exception e = null;
try
{
String[] fields = new String[2];
fields[0] = "artist";
fields[1] = "recording";

QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41, 
fields, new StandardAnalyzer(Version.LUCENE_41));
qp.parse("artist:pandora /reyli && recording:yo/Alguien");
   

[jira] [Commented] (SOLR-7198) Deleting a collection during leader election results in left over znodes in ZK

2015-03-06 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350722#comment-14350722
 ] 

Gregory Chanan commented on SOLR-7198:
--

bq. 
https://github.com/apache/lucene-solr/blob/lucene_solr_4_10/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java#L138

[~vamsee] a quick tip: when you are on a page like this, press y to bring up 
the actual revision, i.e. 
https://github.com/apache/lucene-solr/blob/372e8448021d00d3466b45da8a6e57736354eee8/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java#L138

that way the page will be static and won't change from under you.

 Deleting a collection during leader election results in left over znodes in 
 ZK 
 ---

 Key: SOLR-7198
 URL: https://issues.apache.org/jira/browse/SOLR-7198
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Vamsee Yarlagadda

 I am seeing this issue while trying to run a collection delete operation in 
 the middle of leader election process.
 Contents of ZK (after issuing collection delete and waiting for some time)
 {code}
 /
   /aliases.json
   /clusterstate.json
   /collections
     SolrUpgrade_collection
       leaders
         shard2_1
   /configs
   /live_nodes
   /overseer
   /overseer_elect
   /solr
   /solr.xml

 Contents of znode shard2_1:
 version = 0
 aversion = 0
 children_count = 0
 ctime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
 cversion = 0
 czxid = 22815
 ephemeralOwner = 93427899815755800
 mtime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
 mzxid = 22815
 pzxid = 22815
 dataLength = 194
 {
   "core":"SolrUpgrade_collection_shard2_1_replica2",
   "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
   "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"
 }
 {code}
 Clusterstate.json before running a delete collection
 {code}
 {
   "shards":{
     "shard1":{
       "range":"8000-",
       "state":"active",
       "replicas":{
         "core_node1":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard1_replica2",
           "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr",
           "leader":"true"},
         "core_node2":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard1_replica1",
           "node_name":"search-testing-c5-ha-2.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-2.vpc.cloudera.com:8983/solr"}}},
     "shard2_0":{
       "range":"0-3fff",
       "state":"active",
       "replicas":{
         "core_node5":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_0_replica1",
           "node_name":"search-testing-c5-ha-3.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-3.vpc.cloudera.com:8983/solr",
           "leader":"true"},
         "core_node7":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_0_replica2",
           "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr"}}},
     "shard2_1":{
       "range":"4000-7fff",
       "state":"active",
       "replicas":{
         "core_node8":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_1_replica2",
           "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"},
         "core_node9":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_1_replica3",
           "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"}}}},
   "maxShardsPerNode":"10",
   "router":"compositeId",
   "replicationFactor":"2",
   "autoAddReplicas":"false",
   "routerSpec":{"name":"compositeId"}}
 {code}
 As we can notice, shard (*shard2_1*) doesn't have any leader, and I can see 
 from the logs that the replicas of the shard just started the leader election 
 process. And here are the Solr logs from one of the above replicas which 
 eventually becomes the leader and registers in ZK even though the collection 
 was deleted.
 {code}
 2015-03-05 22:05:25,383 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: Running the leader process 
 for shard shard2_1
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: Checking if I 
 (core=SolrUpgrade_collection_shard2_1_replica2,coreNodeName=core_node8) 
 should try and be the leader.
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: My last published State was 
 Active, it's okay to be the leader.
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: I may be 

[jira] [Updated] (SOLR-7198) Deleting a collection during leader election results in left over znodes in ZK

2015-03-06 Thread Vamsee Yarlagadda (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsee Yarlagadda updated SOLR-7198:

Description: 
I am seeing this issue while trying to run a collection delete operation in the 
middle of leader election process.

Contents of ZK (after issuing collection delete and waiting for some time)
{code}
  /
/aliases.json
/clusterstate.json
/collections
  SolrUpgrade_collection
leaders
   shard2_1
/configs
/live_nodes
/overseer
/overseer_elect
/solr
/solr.xml

Contents of znode shard2_1:

version = 0
aversion = 0
children_count = 0
ctime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
cversion = 0
czxid = 22815
ephemeralOwner = 93427899815755800
mtime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
mzxid = 22815
pzxid = 22815
dataLength = 194
{
  "core":"SolrUpgrade_collection_shard2_1_replica2",
  "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
  "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"
}
{code}

Clusterstate.json before running a delete collection
{code}
{
  "shards":{
    "shard1":{
      "range":"80000000-ffffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard1_replica2",
          "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr",
          "leader":"true"},
        "core_node2":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard1_replica1",
          "node_name":"search-testing-c5-ha-2.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-2.vpc.cloudera.com:8983/solr"}}},
    "shard2_0":{
      "range":"0-3fffffff",
      "state":"active",
      "replicas":{
        "core_node5":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard2_0_replica1",
          "node_name":"search-testing-c5-ha-3.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-3.vpc.cloudera.com:8983/solr",
          "leader":"true"},
        "core_node7":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard2_0_replica2",
          "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr"}}},
    "shard2_1":{
      "range":"40000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node8":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard2_1_replica2",
          "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"},
        "core_node9":{
          "state":"active",
          "core":"SolrUpgrade_collection_shard2_1_replica3",
          "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
          "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"}}}},
  "maxShardsPerNode":"10",
  "router":"compositeId",
  "replicationFactor":"2",
  "autoAddReplicas":"false",
  "routerSpec":{"name":"compositeId"}}
{code}

As we can notice, shard (*shard2_1*) doesn't have any leader, and I can see from 
the logs that the replicas of the shard just started the leader election 
process. And here are the Solr logs from one of the above replicas which 
eventually becomes the leader and registers in ZK even though the collection 
was deleted.
{code}
2015-03-05 22:05:25,383 INFO org.apache.solr.cloud.ShardLeaderElectionContext: 
Running the leader process for shard shard2_1
2015-03-05 22:05:25,387 INFO org.apache.solr.cloud.ShardLeaderElectionContext: 
Checking if I 
(core=SolrUpgrade_collection_shard2_1_replica2,coreNodeName=core_node8) should 
try and be the leader.
2015-03-05 22:05:25,387 INFO org.apache.solr.cloud.ShardLeaderElectionContext: 
My last published State was Active, it's okay to be the leader.
2015-03-05 22:05:25,387 INFO org.apache.solr.cloud.ShardLeaderElectionContext: 
I may be the new leader - try and sync
2015-03-05 22:05:25,506 INFO org.apache.solr.common.cloud.ZkStateReader: A 
cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
2015-03-05 22:05:25,620 INFO org.apache.solr.core.SolrCore.Request: 
[SolrUpgrade_collection_shard2_1_replica2] webapp=/solr path=/select 
params={q=*:*&distrib=false&wt=javabin&version=2} hits=118 status=0 QTime=1 
2015-03-05 22:05:25,623 INFO org.apache.solr.core.SolrCore.Request: 
[SolrUpgrade_collection_shard2_1_replica3] webapp=/solr path=/select 
params={q=*:*&distrib=false&wt=javabin&version=2} hits=118 status=0 QTime=0 
2015-03-05 22:05:27,392 INFO org.apache.solr.cloud.ElectionContext: canceling 
election 
/collections/SolrUpgrade_collection/leader_elect/shard2_1/election/93427899815755803-core_node8-n_01
2015-03-05 22:05:27,393 INFO org.apache.solr.core.SolrCore: 
[SolrUpgrade_collection_shard2_1_replica3]  CLOSING SolrCore 

[jira] [Commented] (SOLR-7198) Deleting a collection during leader election results in left over znodes in ZK

2015-03-06 Thread Vamsee Yarlagadda (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350730#comment-14350730
 ] 

Vamsee Yarlagadda commented on SOLR-7198:
-

Thanks for highlighting this. I will update the description to reflect this.
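
For anyone reproducing this, a quick way to confirm the leftover znode is to 
check the path from the tree dump above with the plain ZooKeeper client; a 
minimal sketch (the connect string is hypothetical):

{code}
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CheckLeftoverZnode {
  public static void main(String[] args) throws Exception {
    // Connect string is hypothetical; use your ensemble's address
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
    // Path taken from the ZK tree dump in the description
    Stat stat = zk.exists(
        "/collections/SolrUpgrade_collection/leaders/shard2_1", false);
    if (stat == null) {
      System.out.println("clean - no leftover znode");
    } else {
      System.out.println("leftover znode, ephemeralOwner=" + stat.getEphemeralOwner());
    }
    zk.close();
  }
}
{code}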

 Deleting a collection during leader election results in left over znodes in 
 ZK 
 ---

 Key: SOLR-7198
 URL: https://issues.apache.org/jira/browse/SOLR-7198
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Vamsee Yarlagadda

 I am seeing this issue while trying to run a collection delete operation in 
 the middle of leader election process.
 Contents of ZK (after issuing collection delete and waiting for some time)
 {code}
   /
 /aliases.json
 /clusterstate.json
 /collections
   SolrUpgrade_collection
 leaders
shard2_1
 /configs
 /live_nodes
 /overseer
 /overseer_elect
 /solr
 /solr.xml
 Contents of znode shard2_1:
 version = 0
 aversion = 0
 children_count = 0
 ctime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
 cversion = 0
 czxid = 22815
 ephemeralOwner = 93427899815755800
 mtime = Thu Mar 05 22:05:28 PST 2015 (1425621928169)
 mzxid = 22815
 pzxid = 22815
 dataLength = 194
 {
   "core":"SolrUpgrade_collection_shard2_1_replica2",
   "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
   "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"
 }
 {code}
 Clusterstate.json before running a delete collection
 {code}
 {
   "shards":{
     "shard1":{
       "range":"80000000-ffffffff",
       "state":"active",
       "replicas":{
         "core_node1":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard1_replica2",
           "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr",
           "leader":"true"},
         "core_node2":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard1_replica1",
           "node_name":"search-testing-c5-ha-2.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-2.vpc.cloudera.com:8983/solr"}}},
     "shard2_0":{
       "range":"0-3fffffff",
       "state":"active",
       "replicas":{
         "core_node5":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_0_replica1",
           "node_name":"search-testing-c5-ha-3.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-3.vpc.cloudera.com:8983/solr",
           "leader":"true"},
         "core_node7":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_0_replica2",
           "node_name":"search-testing-c5-ha-4.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-4.vpc.cloudera.com:8983/solr"}}},
     "shard2_1":{
       "range":"40000000-7fffffff",
       "state":"active",
       "replicas":{
         "core_node8":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_1_replica2",
           "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"},
         "core_node9":{
           "state":"active",
           "core":"SolrUpgrade_collection_shard2_1_replica3",
           "node_name":"search-testing-c5-ha-1.vpc.cloudera.com:8983_solr",
           "base_url":"http://search-testing-c5-ha-1.vpc.cloudera.com:8983/solr"}}}},
   "maxShardsPerNode":"10",
   "router":"compositeId",
   "replicationFactor":"2",
   "autoAddReplicas":"false",
   "routerSpec":{"name":"compositeId"}}
 {code}
 As we can notice, shard (*shard2_1*) doesn't have any leader, and I can see 
 from the logs that the replicas of the shard just started the leader election 
 process. And here are the Solr logs from one of the above replicas which 
 eventually becomes the leader and registers in ZK even though the collection 
 was deleted.
 {code}
 2015-03-05 22:05:25,383 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: Running the leader process 
 for shard shard2_1
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: Checking if I 
 (core=SolrUpgrade_collection_shard2_1_replica2,coreNodeName=core_node8) 
 should try and be the leader.
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: My last published State was 
 Active, it's okay to be the leader.
 2015-03-05 22:05:25,387 INFO 
 org.apache.solr.cloud.ShardLeaderElectionContext: I may be the new leader - 
 try and sync
 2015-03-05 22:05:25,506 INFO org.apache.solr.common.cloud.ZkStateReader: A 
 cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
 2015-03-05 22:05:25,620 INFO org.apache.solr.core.SolrCore.Request: 
 [SolrUpgrade_collection_shard2_1_replica2] 

[jira] [Commented] (LUCENE-6319) Delegating OneMerge

2015-03-06 Thread Elliott Bradshaw (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350737#comment-14350737
 ] 

Elliott Bradshaw commented on LUCENE-6319:
--

No thoughts on this?  I'll admit, I'm a bit new to the Index API, so if for 
some reason this wouldn't work I totally understand.
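
For reference, a rough sketch of what the delegation could look like once the 
fields move behind accessors; the getter names below are hypothetical (today 
{{segments}} and {{estimatedMergeBytes}} are public fields on OneMerge, which is 
exactly what blocks this):

{code}
import java.util.List;
import org.apache.lucene.index.MergePolicy.OneMerge;
import org.apache.lucene.index.SegmentCommitInfo;

class DelegatingOneMerge extends OneMerge {
  private final OneMerge delegate;

  DelegatingOneMerge(OneMerge delegate) {
    super(delegate.segments);  // 'segments' is currently a public field
    this.delegate = delegate;
  }

  // Hypothetical getters: these would replace the direct field reads that
  // IndexWriter and friends do today
  public List<SegmentCommitInfo> getSegments() {
    return delegate.segments;
  }

  public long getEstimatedMergeBytes() {
    return delegate.estimatedMergeBytes;
  }
}
{code}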

 Delegating OneMerge
 ---

 Key: LUCENE-6319
 URL: https://issues.apache.org/jira/browse/LUCENE-6319
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Elliott Bradshaw

 In trying to integrate SortingMergePolicy into ElasticSearch, I ran into an 
 issue where the custom merge logic was being stripped out by 
 IndexUpgraderMergeSpecification.  Related issue here:
 https://github.com/elasticsearch/elasticsearch/issues/9731
 In an endeavor to fix this, I attempted to create a DelegatingOneMerge that 
 could be used to chain the different MergePolicies together.  I quickly 
 discovered this to be impossible, due to the direct member variable access of 
 OneMerge by IndexWriter and other classes.  It would be great if this 
 variable access could be privatized and the consuming classes modified to use 
 the appropriate getters and setters.  Here's an example DelegatingOneMerge 
 and modified OneMerge.
 https://gist.github.com/ebradshaw/e0b74e9e8d4976ab9e0a
 https://gist.github.com/ebradshaw/d72116a014f226076303
 The downside here is that this would require an API change, as there are 
 three public variables in OneMerge: estimatedMergeBytes, segments and 
 totalDocCount.  These would have to be moved behind public getters.
 Without this change, I'm not sure how we could get the SortingMergePolicy 
 working in ES, but if anyone has any other suggestions I'm all ears!  Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350619#comment-14350619
 ] 

Michael McCandless commented on LUCENE-6343:


Thanks @apeteri you are correct, I'll add a test and commit.
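
For the curious, the corrected example is easy to verify against SmallFloat's 
byte315 encoding, which DefaultSimilarity's norm encoding is built on; a 
minimal sketch:

{code}
import org.apache.lucene.util.SmallFloat;

public class NormPrecisionExample {
  public static void main(String[] args) {
    // Encode 0.89 with the 3-mantissa-bit byte315 format used for norms
    byte encoded = SmallFloat.floatToByte315(0.89f);
    // Decoding prints 0.875, not 0.75
    System.out.println(SmallFloat.byte315ToFloat(encoded));
  }
}
{code}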

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350619#comment-14350619
 ] 

Michael McCandless edited comment on LUCENE-6343 at 3/6/15 5:44 PM:


Thanks [~apeteri] you are correct, I'll add a test and commit.


was (Author: mikemccand):
Thanks @apeteri you are correct, I'll add a test and commit.

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4942) Indexed non-point shapes index excessive terms

2015-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350585#comment-14350585
 ] 

David Smiley edited comment on LUCENE-4942 at 3/6/15 5:47 PM:
--

I'd like to solve this issue (excessive terms) and _also_ address 
differentiating between fully-contained leaves vs approximated leaves (for 
LUCENE-5776) in one go, tracked on this issue, to avoid dealing with back-compat 
more than once.  That is, we change how the PrefixTree derivative strategies 
encode the term data just once, instead of over more than one issue.  
And I'm thinking that on trunk we wouldn't worry about back-compat (it is trunk 
after all), and then the port to 5x would have to consider it -- the down-side 
being that some spatial code on trunk vs 5x may vary a bit.  Perhaps the 
back-compat detection in 5x would work via a check for Version, similar to 
Analyzer's having a version property that can optionally be set.

I'm not sure how contentious it may be to simply forgo back-compat.  _Just_ 
re-index.  And you're not affected if all you have is point data, which seems 
to be at least 80% of the users using spatial.  And you're furthermore not 
affected if your pre-existing indexes have non-point data but the only 
predicate you use is Intersects (no Contains, no Within, no heatmaps). Again 
I'd guess that lobs off another 80% of users since Intersects is so common.


was (Author: dsmiley):
I'd like to solve this issue (excessive terms) and _also_ address 
differentiating between fully-contained leaves vs approximated leaves (for 
LUCENE-5776) in one go, tracked on this issue, to avoid dealing with back-compat 
more than once.  That is, we change how the PrefixTree derivative strategies 
encode the term data just once, instead of over more than one issue.  
And I'm thinking that on trunk we wouldn't worry about back-compat (it is trunk 
after all), and then the port to 5x would have to consider it -- the down-side 
being that some spatial code on trunk vs 5x may vary a bit.  Perhaps the 
back-compat detection in 5x would work via a check for Version, similar to 
Analyzer's having a version property that can optionally be set.

I'm not sure how contentious it may be to simply forgo back-compat.  _Just_ 
re-index.  And you're not affected if all you have is point data, which seems 
to be at least 80% of the users using spatial.

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, one with the leaf byte and one without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6724) HttpServer maxRetries attributes seems like not being used as expected

2015-03-06 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-6724:

Attachment: SOLR-6724.patch

A straightforward patch moving retries logic from createMethod to executeMethod 
and a unit test to confirm that it works
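
The intended shape of the fix, for context, is a retry loop around the actual 
network call rather than around request construction; a minimal sketch under 
that assumption (class and method names hypothetical, not the patch itself):

{code}
import java.io.IOException;
import org.apache.http.HttpResponse;
import org.apache.http.NoHttpResponseException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpRequestBase;

class RetryingExecutor {
  static HttpResponse executeWithRetries(HttpClient client, HttpRequestBase method,
                                         int maxRetries) throws IOException {
    int attempts = 0;
    while (true) {
      try {
        return client.execute(method);      // the real network call
      } catch (NoHttpResponseException e) { // server dropped the connection
        if (++attempts > maxRetries) {
          throw e;                          // retries exhausted
        }
      }
    }
  }
}
{code}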

 HttpServer maxRetries attributes seems like not being used as expected
 --

 Key: SOLR-6724
 URL: https://issues.apache.org/jira/browse/SOLR-6724
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10.2
 Environment: OS X 10.9.5
 Java 1.7.0_60
Reporter: Márcio Furlani Carmona
Priority: Minor
 Attachments: SOLR-6724.patch


 Looks like maxRetries is being misused in the 
 org.apache.solr.client.solrj.impl.HttpSolrServer.createMethod(SolrRequest) 
 instead of being used in the executeMethod(HttpRequestBase,ResponseParser).
 In the current implementation the maxRetries is used in a loop that only 
 instantiates the HttpRequestBase but doesn't actually make any HTTP 
 request. Also, the retries happen even after a successful instantiation of the 
 HttpRequestBase, since there's no break either.
 I notice there's also a catch for NoHttpResponseException but as no HTTP 
 request is made I guess it will never happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 11928 - Failure!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11928/
Java: 32bit/jdk1.8.0_40-ea-b22 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([64ADBFEEB3B8D0AE:ECF980341D44BD56]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Assigned] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-6343:
--

Assignee: Michael McCandless

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6346) always initCause() ParseExceptions from Version.java

2015-03-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6346.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 always initCause() ParseExceptions from Version.java
 

 Key: LUCENE-6346
 URL: https://issues.apache.org/jira/browse/LUCENE-6346
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6346.patch


 This is only done some of the time, we should do it always.
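
The pattern being standardized is simply chaining the underlying cause into the 
ParseException before throwing it; a minimal self-contained sketch (the parsed 
string is hypothetical):

{code}
import java.text.ParseException;

public class InitCauseExample {
  public static void main(String[] args) {
    try {
      Integer.parseInt("4.x");  // not a number
    } catch (NumberFormatException nfe) {
      ParseException pe = new ParseException("Invalid version: 4.x", 0);
      pe.initCause(nfe);        // keep the root cause attached
      pe.printStackTrace();     // now shows "Caused by: NumberFormatException"
    }
  }
}
{code}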



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6343) Missing character in DefaultSimilarity's javadoc

2015-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350644#comment-14350644
 ] 

ASF subversion and git services commented on LUCENE-6343:
-

Commit 1664688 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1664688 ]

LUCENE-6343: DefaultSimilarity javadocs has the wrong example for encode/decode 
precision loss

 Missing character in DefaultSimilarity's javadoc
 

 Key: LUCENE-6343
 URL: https://issues.apache.org/jira/browse/LUCENE-6343
 Project: Lucene - Core
  Issue Type: Bug
Reporter: András Péteri
Assignee: Michael McCandless
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6343.patch


 The part which describes precision loss of norm values is missing a 
 character; the encoded input value {{0.89}} in the example will actually be 
 decoded to {{0.875}}, not {{0.75}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2726 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2726/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([C079BE49BBD8562B:482D819315243BD3]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:901)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Comment Edited] (SOLR-6724) HttpServer maxRetries attributes seems like not being used as expected

2015-03-06 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350661#comment-14350661
 ] 

Greg Solovyev edited comment on SOLR-6724 at 3/6/15 6:19 PM:
-

I just added a straightforward patch moving retries logic from createMethod to 
executeMethod and a unit test to confirm that it works


was (Author: grishick):
A straightforward patch moving retries logic from createMethod to executeMethod 
and a unit test to confirm that it works

 HttpServer maxRetries attributes seems like not being used as expected
 --

 Key: SOLR-6724
 URL: https://issues.apache.org/jira/browse/SOLR-6724
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10.2
 Environment: OS X 10.9.5
 Java 1.7.0_60
Reporter: Márcio Furlani Carmona
Priority: Minor
 Attachments: SOLR-6724.patch


 Looks like maxRetries is being misused in the 
 org.apache.solr.client.solrj.impl.HttpSolrServer.createMethod(SolrRequest) 
 instead of being used in the executeMethod(HttpRequestBase,ResponseParser).
 In the current implementation the maxRetries is used in a loop that only 
 instantiates the HttpRequestBase but doesn't actually make any HTTP 
 request. Also, the retries happen even after a successful instantiation of the 
 HttpRequestBase, since there's no break either.
 I notice there's also a catch for NoHttpResponseException but as no HTTP 
 request is made I guess it will never happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7204) Improve error handling in create collection API

2015-03-06 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-7204:
--

 Summary: Improve error handling in create collection API
 Key: SOLR-7204
 URL: https://issues.apache.org/jira/browse/SOLR-7204
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
Priority: Minor


I was trying to create a collection on a SolrCloud deployment alongside a 
kerberized Hadoop cluster. I kept getting the following error:

org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'orders_shard1_replica2': Unable to create core 
[orders_shard1_replica2] Caused by: Lock obtain timed out: 
org.apache.solr.store.hdfs.HdfsLockFactory$HdfsLock@451997e1

On careful analysis of the logs, I realized it was due to Solr not being able to 
talk to HDFS properly because of the following error:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target

We should improve the error handling such that we return the root cause of the 
error (in this case the SSLHandshakeException instead of the lock timeout 
exception).
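
One way to surface it is to walk the exception's cause chain; a minimal sketch 
(helper name hypothetical, not Solr's actual error-handling code):

{code}
public class RootCause {
  // Walk the cause chain to find the deepest (root) cause
  static Throwable rootCause(Throwable t) {
    Throwable cur = t;
    while (cur.getCause() != null && cur.getCause() != cur) {
      cur = cur.getCause();
    }
    return cur;
  }

  public static void main(String[] args) {
    // Stand-in exceptions mirroring the report above
    Exception root = new IllegalStateException("PKIX path building failed");
    Exception wrapper = new RuntimeException("Lock obtain timed out", root);
    System.out.println(rootCause(wrapper));  // prints the PKIX failure
  }
}
{code}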


 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2015-03-06 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350763#comment-14350763
 ] 

Ryan McKinley commented on LUCENE-4942:
---

+1

bq. I'm not sure how contentious it may be to simply forgo back-compat

I expect anyone who would be affected would not rejoice at the tradeoff.  As 
is, the people who would be affected either have very few documents or 
*ginormous* indexes.

We could take a poll on solr-user to see if anyone is using RPT for non-points 
with a query other than Intersects (no need to worry about heatmaps... it has not 
been released yet!)


 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, one with the leaf byte and one without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350840#comment-14350840
 ] 

David Smiley commented on LUCENE-6328:
--

FWIW that does sound more natural.  Filters don't score; Queries do (or can).

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.
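
A rough sketch of that rewrite idea against the 4.x Filter API (the filter body 
is a trivial match-nothing placeholder; an illustration, not the attached patch):

{code}
import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Bits;

class ExampleFilter extends Filter {
  @Override
  public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs)
      throws IOException {
    return null;  // placeholder: null means "matches no documents"
  }

  @Override
  public Query rewrite(IndexReader reader) throws IOException {
    // Used as a Query, the filter rewrites to a CSQ around itself, so
    // clone()/setBoost() get called on the CSQ instead of the filter
    return new ConstantScoreQuery(this);
  }
}
{code}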



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7175) <optimize maxSegments="2"/> results in more than 2 segments after optimize finishes

2015-03-06 Thread Tom Burton-West (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350835#comment-14350835
 ] 

Tom Burton-West commented on SOLR-7175:
---

Hi Mike,
Thanks for taking a look.  We found a race condition in our code that resulted 
in the driver thinking all the indexers were finished when they sometimes 
weren't.  It just happened that we introduced this bug about the time 
we switched from Solr 3.6 to Solr 4.10.2, so I jumped to the wrong conclusion.  
I'll go ahead and close the issue.

Tom

 <optimize maxSegments="2"/> results in more than 2 segments after optimize 
 finishes
 ---

 Key: SOLR-7175
 URL: https://issues.apache.org/jira/browse/SOLR-7175
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
 Environment: linux
Reporter: Tom Burton-West
Priority: Minor
 Attachments: build-1.indexwriterlog.2015-02-23.gz, 
 build-4.iw.2015-02-25.txt.gz, solr4.shotz


 After finishing indexing and running a commit, we issue an <optimize 
 maxSegments="2"/> to Solr.  With Solr 4.10.2 we are seeing one or two shards 
 (out of 12) with 3 or 4 segments after the optimize finishes.  There are no 
 errors in the Solr logs or indexwriter logs.
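
For reference, the same command can be issued from SolrJ; a minimal sketch (the 
core URL is hypothetical):

{code}
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class OptimizeExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    // waitFlush, waitSearcher, maxSegments - same effect as <optimize maxSegments="2"/>
    server.optimize(true, true, 2);
    server.shutdown();
  }
}
{code}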



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4942) Indexed non-point shapes index excessive terms

2015-03-06 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350763#comment-14350763
 ] 

Ryan McKinley edited comment on LUCENE-4942 at 3/6/15 7:25 PM:
---

+1

bq. I'm not sure how contentious it may be to simply forgo back-compat

I expect anyone who would be affected would rejoice at the tradeoff.  As is, 
the people who would be affected either have very few documents or *ginormous* 
indexes.

We could take a poll on solr-user to see if anyone is using RPT for non-points 
with a query other than Intersects (no need to worry about heatmaps... it has not 
been released yet!)



was (Author: ryantxu):
+1

bq. I'm not sure how contentious it may be to simply forgo back-compat

I expect anyone who would be affected would not rejoice at the tradeoff.  As 
is, the people who would be affected either have very few documents or 
*ginormous* indexes.

We could take a poll on solr-user to see if anyone is using RPT for non-points 
with a query other than Intersects (no need to worry about heatmaps... it has not 
been released yet!)


 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, one with the leaf byte and one without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-06 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350824#comment-14350824
 ] 

Xu Zhang commented on SOLR-6350:


Sorry for losing track of this Jira. I can work on it this weekend. 

[~hossman] Any quick feedback?

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-xu.patch, SOLR-6350-xu.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}
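
If the syntax lands as proposed, setting it from SolrJ would look roughly like 
this (a sketch of the proposed, not-yet-committed parameter):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class PercentilesExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.set("stats", "true");
    // Proposed local-param syntax from the example above
    q.set("stats.field", "{!percentiles='1,2,98,99,99.999'}price");
    System.out.println(q);
  }
}
{code}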



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7175) <optimize maxSegments="2"/> results in more than 2 segments after optimize finishes

2015-03-06 Thread Tom Burton-West (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Burton-West closed SOLR-7175.
-
Resolution: Not a Problem

The problem was our client code erroneously sending items to Solr to index after 
sending the optimize command.  Not a Solr issue.

 <optimize maxSegments="2"/> results in more than 2 segments after optimize 
 finishes
 ---

 Key: SOLR-7175
 URL: https://issues.apache.org/jira/browse/SOLR-7175
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
 Environment: linux
Reporter: Tom Burton-West
Priority: Minor
 Attachments: build-1.indexwriterlog.2015-02-23.gz, 
 build-4.iw.2015-02-25.txt.gz, solr4.shotz


 After finishing indexing and running a commit, we issue an <optimize 
 maxSegments="2"/> to Solr.  With Solr 4.10.2 we are seeing one or two shards 
 (out of 12) with 3 or 4 segments after the optimize finishes.  There are no 
 errors in the Solr logs or indexwriter logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1991 - Still Failing!

2015-03-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1991/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.util.TestUtils.testNanoTimeSpeed

Error Message:
Time taken for System.nanoTime is too high

Stack Trace:
java.lang.AssertionError: Time taken for System.nanoTime is too high
at 
__randomizedtesting.SeedInfo.seed([D4F9E838BF9130DD:A203FC0EB33AEB8D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.util.TestUtils.testNanoTimeSpeed(TestUtils.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestUtils

Error Message:
90 threads leaked from SUITE scope at org.apache.solr.util.TestUtils: 1) 
Thread[id=5971, name=nanoTimeTestThread-2962-thread-25, state=TIMED_WAITING, 
group=TGRP-TestUtils] at sun.misc.Unsafe.park(Native 

[jira] [Commented] (LUCENE-6328) Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException

2015-03-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350866#comment-14350866
 ] 

Robert Muir commented on LUCENE-6328:
-

that doesn't quite work because the two are incongruent: Weight is top-level and 
DocIdSet is per-segment.

 Make Filter.clone and Filter.setBoost throw an UnsupportedOperationException
 

 Key: LUCENE-6328
 URL: https://issues.apache.org/jira/browse/LUCENE-6328
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6328.patch


 The rewrite process uses a combination of calls to clone() and 
 setBoost(boost) in order to rewrite queries. This is a bit weird for filters 
 given that they were not originally designed to care about scoring.
 Using a filter directly as a query fails unit tests today since filters do 
 not pass the QueryUtils checks: it is expected that cloning and changing the 
 boost results in an instance which is unequal. However existing filters do 
 not take into account the getBoost() parameter inherited from Query so this 
 test fails.
 I think it would be less error-prone to throw an 
 UnsupportedOperationException for clone() and setBoost() on filters and 
 disable the check in QueryUtils for filters.
 In order to keep rewriting working, filters could rewrite to a CSQ around 
 themselves so that clone() and setBoost() would be called on the CSQ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6319) Delegating OneMerge

2015-03-06 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350875#comment-14350875
 ] 

Ryan Ernst commented on LUCENE-6319:


[~ebradshaw] Sorry I haven't had time to look. I plan to take a look over the 
next week or 2.  

 Delegating OneMerge
 ---

 Key: LUCENE-6319
 URL: https://issues.apache.org/jira/browse/LUCENE-6319
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Elliott Bradshaw

 In trying to integrate SortingMergePolicy into ElasticSearch, I ran into an 
 issue where the custom merge logic was being stripped out by 
 IndexUpgraderMergeSpecification.  Related issue here:
 https://github.com/elasticsearch/elasticsearch/issues/9731
 In an endeavor to fix this, I attempted to create a DelegatingOneMerge that 
 could be used to chain the different MergePolicies together.  I quickly 
 discovered this to be impossible, due to the direct member variable access of 
 OneMerge by IndexWriter and other classes.  It would be great if this 
 variable access could be privatized and the consuming classes modified to use 
 the appropriate getters and setters.  Here's an example DelegatingOneMerge 
 and modified OneMerge.
 https://gist.github.com/ebradshaw/e0b74e9e8d4976ab9e0a
 https://gist.github.com/ebradshaw/d72116a014f226076303
 The downside here is that this would require an API change, as there are 
 three public variables in OneMerge: estimatedMergeBytes, segments and 
 totalDocCount.  These would have to be moved behind public getters.
 Without this change, I'm not sure how we could get the SortingMergePolicy 
 working in ES, but if anyone has any other suggestions I'm all ears!  Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2727 - Still Failing

2015-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2727/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:28340/_/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:28340/_/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([60C2A948E5D4B551:E89696924B28D8A9]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:597)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:920)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:811)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:754)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
