[jira] [Created] (SOLR-6429) Unload core during tlog replay

2014-08-25 Thread Eran H (JIRA)
Eran H created SOLR-6429:


 Summary: Unload core during tlog replay
 Key: SOLR-6429
 URL: https://issues.apache.org/jira/browse/SOLR-6429
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Eran H


Hi,
I use Solr 4.8.1 with SolrJ. When my server comes up it starts Solr. If the 
server is configured to delete all data and start fresh, I unload my Solr core 
and then create it again. Meanwhile, Solr is still starting up.
Solr receives the core unload request while replaying its transaction log. 

Any logical behavior would be acceptable at this point:
1. An exception saying the core cannot be dropped because it is in the middle 
of log replay.
2. Waiting until the log replay is finished, then dropping the core.
3. Force-dropping the core and stopping the replay.
4. An API that lets me ask Solr whether it is busy.
5. Etc.

What really happens is the only illogical behavior:
I get a timeout exception(!) but Solr continues to replay the logs. The 
core.properties file is deleted, a core.properties.unloaded file is created, 
and the folder is not deleted as I requested (via SolrJ). I can't delete the 
folder myself because it's locked, so I'm stuck with the core folder but an 
unloaded core. I can't create the core again because the folder already 
exists, and I can't unload the core again because it no longer exists!

If Solr receives a core unload request during tlog replay, it should either 
reject it with a dedicated exception (not a timeout) or process it fully. 
Currently it tries to do both, which cannot work.
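As an illustration of option 2 above ("wait until replay finishes, then drop the core"), here is a minimal sketch of the desired blocking semantics. All names are hypothetical; this is not Solr's actual core-container API.

```java
import java.util.concurrent.CountDownLatch;

public class CoreLifecycle {
    private final CountDownLatch replayDone = new CountDownLatch(1);
    private volatile boolean unloaded = false;

    /** Called by the replay thread when tlog replay completes. */
    public void finishReplay() {
        replayDone.countDown();
    }

    /** Blocks until replay is done, then marks the core unloaded. */
    public boolean unload() {
        try {
            replayDone.await(); // wait for tlog replay to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false; // interrupted: core stays loaded
        }
        unloaded = true;
        return true;
    }

    public boolean isUnloaded() {
        return unloaded;
    }
}
```

The point is only that any one of the behaviors 1-3 is easy to state precisely; the bug is that the current behavior is none of them.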

Thanks!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



org.apache.lucene.analysis.Analyzer

2014-08-25 Thread DVHV Sekhar
Please add a public method Analyzer#createComponents(Analyzer analyzer, String 
fieldName, Reader reader) to org.apache.lucene.analysis.Analyzer.

This is required to decorate existing analyzers. Alternatively, make the 
existing createComponents method public.

Thanks,
Sekhar 

RE: org.apache.lucene.analysis.Analyzer

2014-08-25 Thread Uwe Schindler
Hi,

for decorating, use the class AnalyzerWrapper. It has protected methods to 
override, wrapComponents() and wrapReader(); getWrappedAnalyzer() must return 
the analyzer you want to wrap.

 

See ShingleAnalyzerWrapper as an example: http://goo.gl/4DcBQS
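The pattern can be illustrated with stand-in types (these are toy classes, not the real org.apache.lucene.analysis API): the wrapper delegates to the analyzer it decorates and post-processes the returned components, which is what wrapComponents() lets you do.

```java
public class WrapperSketch {
    public interface Components {
        String describe();
    }

    public static class BaseAnalyzer {
        public Components createComponents(String fieldName) {
            return () -> "base(" + fieldName + ")";
        }
    }

    /**
     * Analogue of AnalyzerWrapper: the 'wrapped' field plays the role of
     * getWrappedAnalyzer(), and the decoration applied to the delegate's
     * output plays the role of wrapComponents().
     */
    public static class ShingleLikeWrapper extends BaseAnalyzer {
        private final BaseAnalyzer wrapped;

        public ShingleLikeWrapper(BaseAnalyzer wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public Components createComponents(String fieldName) {
            Components inner = wrapped.createComponents(fieldName);
            return () -> "shingle(" + inner.describe() + ")";
        }
    }
}
```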

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: DVHV Sekhar [mailto:dvhv_sek...@yahoo.com.INVALID] 
Sent: Monday, August 25, 2014 9:11 AM
To: dev@lucene.apache.org; dev-subscr...@lucene.apache.org
Subject: org.apache.lucene.analysis.Analyzer

Please add public method Analyzer#createComponents(Analyzer analyzer, String 
fieldName, Reader reader) to org.apache.lucene.analysis.Analyzer.

This is required to decorate existing analyzers. Or make the existing 
createComponents method public.

Thanks,
Sekhar



[jira] [Commented] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108841#comment-14108841
 ] 

Littlestar commented on LUCENE-5899:


I tested Lucene on Shenandoah with large memory and big data (32g, 128g).
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws the above exception.
Looking at the Lucene code, I think the exception is consistent: 
org.apache.lucene.codecs.MappingMultiDocsEnum is not an instance of 
org.apache.lucene.index.DocsAndPositionsEnum.
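The type relationship can be modelled with stand-in classes (toy types, not the real Lucene classes): the two enums only share a common supertype, so a genuine checkcast from one to the other must throw.

```java
public class CastSketch {
    public static class DocsEnum {}
    public static class DocsAndPositionsEnum extends DocsEnum {}
    public static class MappingMultiDocsEnum extends DocsEnum {}

    /** A checkcast to DocsAndPositionsEnum succeeds only for actual subtypes. */
    public static boolean castWouldSucceed(DocsEnum e) {
        return e instanceof DocsAndPositionsEnum;
    }
}
```

The open question in this issue is why an object of the wrong runtime type reached the cast at all, which is what points at the experimental GC rather than at the cast itself.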


 Caused by: java.lang.ClassCastException: 
 org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
 org.apache.lucene.index.DocsAndPositionsEnum
 -

 Key: LUCENE-5899
 URL: https://issues.apache.org/jira/browse/LUCENE-5899
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/codecs
Affects Versions: 4.9
Reporter: Littlestar
 Fix For: 4.10


 Exception in thread "Lucene Merge Thread #0" 
 org.apache.lucene.index.MergePolicy$MergeException: 
 java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
 cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
 Caused by: java.lang.ClassCastException: 
 org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
 org.apache.lucene.index.DocsAndPositionsEnum
   at 
 org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
   at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
   at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
   at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
   at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)






[jira] [Comment Edited] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108841#comment-14108841
 ] 

Littlestar edited comment on LUCENE-5899 at 8/25/14 7:22 AM:
-

I tested Lucene on Shenandoah with large memory and big data (128G, 1.2T).
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws the above exception.
Looking at the Lucene code, I think the exception is consistent: 
org.apache.lucene.codecs.MappingMultiDocsEnum is not an instance of 
org.apache.lucene.index.DocsAndPositionsEnum.



was (Author: cnstar9988):
I test lucene on shenandoah with big memory + bigdata(32g, 128g).   
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws above exception.
I see the luence code, I think the exception is ok.
Because org.apache.lucene.codecs.MappingMultiDocsEnum is not instanceof  
org.apache.lucene.index.DocsAndPositionsEnum.








[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 1753 - Failure!

2014-08-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1753/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 32537 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.7
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.7
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.3
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] Forbidden method invocation: 
java.text.DecimalFormat#&lt;init&gt;(java.lang.String) [Uses default locale]
[forbidden-apis]   in org.apache.solr.update.UpdateLog$LogReplayer 
(UpdateLog.java:1272)
[forbidden-apis] Scanned  (and 1523 related) class file(s) for forbidden 
API invocations (in 7.91s), 1 error(s).

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:485: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:73: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build.xml:271: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:477: 
Check for forbidden API calls failed, see log.

Total time: 191 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0 
-XX:-UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
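The forbidden-apis failure above flags DecimalFormat's one-argument constructor, which formats using the JVM's default locale. The usual locale-safe fix is to pass explicit symbols, typically Locale.ROOT; the pattern below is illustrative, not the one actually used in UpdateLog.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class LocaleSafeFormat {
    /**
     * Same pattern, but the decimal separator is fixed to '.' regardless of
     * the JVM's default locale.
     */
    public static String format(double value) {
        DecimalFormat df = new DecimalFormat("#.##",
                DecimalFormatSymbols.getInstance(Locale.ROOT));
        return df.format(value);
    }
}
```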




[jira] [Commented] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108849#comment-14108849
 ] 

Uwe Schindler commented on LUCENE-5899:
---

If it works with other garbage collectors, it is more likely a bug in 
Shenandoah. The other new one, G1GC, also has many bugs and throws crazy 
exceptions in some cases (like a NullPointerException where nothing can be 
null).

From my review of the code, the casts are correct; it is just something 
Hotspot fails to handle correctly.

Please open a bug report at Oracle. We have not yet tested Lucene with this 
GC, because it is not yet part of the official JDK 9 preview releases.







[jira] [Created] (LUCENE-5905) Different behaviour of JapaneseAnalyzer at indexing time vs. at search time

2014-08-25 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-5905:
---

 Summary: Different behaviour of JapaneseAnalyzer at indexing time 
vs. at search time
 Key: LUCENE-5905
 URL: https://issues.apache.org/jira/browse/LUCENE-5905
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.9, 3.6.2
 Environment: Java 8u5
Reporter: Trejkaz


A document with the word 秋葉原 in the body, when analysed by the JapaneseAnalyzer 
(AKA Kuromoji), cannot be found when searching for the same text as a phrase 
query.

Two programs are provided to reproduce the issue. Both programs print out the 
term docs and positions and then the result of parsing the phrase query.

As shown by the output, at analysis time there is a lone Japanese term 秋葉原. 
At query parsing time, there are *three* such terms: 秋葉 and 秋葉原 at 
position 0, and 原 at position 1. Because all terms must be present for a 
phrase query to match, the query never matches, which is quite a serious 
issue for us.

*Any workarounds, no matter how hacky, would be extremely helpful at this 
point.*

My guess is that this is a quirk with the analyser. If it happened with 
StandardAnalyzer, surely someone would have discovered it before I did.
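The failure mode can be simulated without Lucene: the index holds the single analysed token while query parsing produces three, so at least one required query term is missing from the index and the phrase can never match. This is a toy model of the necessary condition only, not of Lucene's position-aware matching.

```java
import java.util.List;
import java.util.Set;

public class PhraseMismatch {
    /**
     * A phrase query can only match if every term the query analyzer
     * produced also exists in the indexed term set.
     */
    public static boolean phraseCanMatch(Set<String> indexedTerms,
                                         List<String> queryTerms) {
        return indexedTerms.containsAll(queryTerms);
    }
}
```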

Lucene 3.6.2 reproduction:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.ja.JapaneseAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.index.TermPositions;
import org.apache.lucene.queryParser.standard.StandardQueryParser;
import org.apache.lucene.queryParser.standard.config.StandardQueryConfigHandler;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.junit.Test;

import static org.hamcrest.Matchers.*;
import static org.junit.Assert.*;

public class TestJapaneseAnalysis {
    @Test
    public void testJapaneseAnalysis() throws Exception {
        try (Directory directory = new RAMDirectory()) {
            Analyzer analyser = new JapaneseAnalyzer(Version.LUCENE_36);

            try (IndexWriter writer = new IndexWriter(directory,
                    new IndexWriterConfig(Version.LUCENE_36, analyser))) {
                Document document = new Document();
                document.add(new Field("content",
                        "blah blah commercial blah blah \u79CB\u8449\u539F blah blah",
                        Field.Store.NO, Field.Index.ANALYZED));
                writer.addDocument(document);
            }

            try (IndexReader reader = IndexReader.open(directory);
                 TermEnum terms = reader.terms(new Term("content", ""));
                 TermPositions termPositions = reader.termPositions()) {
                do {
                    Term term = terms.term();
                    if (!"content".equals(term.field())) {
                        break;
                    }

                    System.out.println(term);
                    termPositions.seek(terms);

                    while (termPositions.next()) {
                        System.out.println("  " + termPositions.doc());
                        int freq = termPositions.freq();
                        for (int i = 0; i < freq; i++) {
                            System.out.println("    " + termPositions.nextPosition());
                        }
                    }
                } while (terms.next());

                StandardQueryParser queryParser = new StandardQueryParser(analyser);
                queryParser.setDefaultOperator(StandardQueryConfigHandler.Operator.AND);
                // quoted to work around strange behaviour of StandardQueryParser
                // treating this as a boolean query
                Query query = queryParser.parse("\"\u79CB\u8449\u539F\"", "content");
                System.out.println(query);

                TopDocs topDocs = new IndexSearcher(reader).search(query, 10);
                assertThat(topDocs.totalHits, is(1));
            }
        }
    }
}
{code}

Lucene 4.9 reproduction:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.ja.JapaneseAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.AtomicReader;
import 

[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-25 Thread Jun Ohtani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108855#comment-14108855
 ] 

Jun Ohtani commented on LUCENE-5859:


I think this ticket should have Fix Version 4.10 too, right?

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: the description of the issue was updated as expected. We should 
 remove this API completely. No one else on the planet has APIs that require 
 a mandatory version parameter.
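The "no-arg ctors forwarding to VERSION_CURRENT" idea from the description amounts to the following shape (hypothetical names, not the actual Lucene API change):

```java
public class VersionedAnalyzerSketch {
    public static final String VERSION_CURRENT = "4.10";

    public final String matchVersion;

    /** Back-compat users pick a version explicitly. */
    public VersionedAnalyzerSketch(String matchVersion) {
        this.matchVersion = matchVersion;
    }

    /** Prototyping users get current behavior without naming a version. */
    public VersionedAnalyzerSketch() {
        this(VERSION_CURRENT);
    }
}
```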






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20) - Build # 10970 - Still Failing!

2014-08-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10970/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 32630 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.7
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.7
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.3
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] Forbidden method invocation: 
java.text.DecimalFormat#&lt;init&gt;(java.lang.String) [Uses default locale]
[forbidden-apis]   in org.apache.solr.update.UpdateLog$LogReplayer 
(UpdateLog.java:1274)
[forbidden-apis] Scanned  (and 1524 related) class file(s) for forbidden 
API invocations (in 1.67s), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:485: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:73: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:271: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:477: 
Check for forbidden API calls failed, see log.

Total time: 103 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_20 -client 
-XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20) - Build # 10970 - Still Failing!

2014-08-25 Thread Uwe Schindler
I fixed this.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Monday, August 25, 2014 10:48 AM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20) - Build # 10970 -
 Still Failing!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10970/
 Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC
 
 All tests passed
 
 Build Log:
 [...truncated 32630 lines...]
 -check-forbidden-all:
 [forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.7
 [forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.7
 [forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.3
 [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/base.txt
 [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
 [forbidden-apis] Loading classes to check...
 [forbidden-apis] Scanning for API signatures and dependencies...
 [forbidden-apis] Forbidden method invocation: java.text.DecimalFormat#&lt;init&gt;(java.lang.String) [Uses default locale]
 [forbidden-apis]   in org.apache.solr.update.UpdateLog$LogReplayer (UpdateLog.java:1274)
 [forbidden-apis] Scanned  (and 1524 related) class file(s) for forbidden API invocations (in 1.67s), 1 error(s).
 
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:485: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:73: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:271: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:477: Check for forbidden API calls failed, see log.
 
 Total time: 103 minutes 49 seconds
 Build step 'Invoke Ant' marked build as failure
 [description-setter] Description set: Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any
 






RE: [VOTE] 4.10.0 RC0

2014-08-25 Thread Uwe Schindler
Hi Ryan,

will you add the RELEASE_NOTES template pages to the Lucene and Solr wiki? I 
would like to add an important note that the security issues with Apache POI 
in Solr's contrib/extraction are resolved with this release. I already checked 
the artifacts manually to confirm that they really fix the Solr security 
issues in contrib/extraction, and I also checked NOTICE.txt (and committed 
some changes to trunk/4.x). Now I am waiting for the smoker to finish!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Ryan Ernst [mailto:r...@iernst.net]
 Sent: Saturday, August 23, 2014 2:09 AM
 To: dev@lucene.apache.org
 Subject: [VOTE] 4.10.0 RC0
 
 Please vote for the first release candidate for Lucene/Solr 4.10.0.
 
 The artifacts can be downloaded here:
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
 rev1619858
 
 Or you can run the smoker tester directly with this command line (assuming
 you have JAVA7_HOME set):
 python3.2 -u dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
 rev1619858
 1619858 4.10.0 /tmp/smoke_test_4_10
 
 Please note, the RC number starts at 0 because I used the sample
 command line in buildAndPushRelease.py.  If there is another release
 candidate, I will jump to RC2 to avoid confusion (thus it would be the
 second RC).  I also plan to open an issue to clean up some things about
 the buildAndPushRelease.py help (or lack thereof).
 
  SUCCESS! [0:35:20.208893]
 Here is my +1
 
 Thanks,
 Ryan
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-3191) field exclusion from fl

2014-08-25 Thread Kuntal Ganguly (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108961#comment-14108961
 ] 

Kuntal Ganguly commented on SOLR-3191:
--

[~ehatcher] Yeah, I will upload the patch along with the test cases for the 
transformer fix tomorrow.

 field exclusion from fl
 ---

 Key: SOLR-3191
 URL: https://issues.apache.org/jira/browse/SOLR-3191
 Project: Solr
  Issue Type: Improvement
Reporter: Luca Cavanna
Priority: Minor
 Attachments: SOLR-3191.patch, SOLR-3191.patch


 I think it would be useful to add a way to exclude fields from the Solr 
 response. If I have, for example, 100 stored fields and I want to return all 
 of them but one, it would be handy to list just the one field I want to 
 exclude instead of the 99 fields for inclusion through fl.
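The requested semantics amount to a set difference: given the stored fields and an exclusion list, the effective fl is everything not excluded. A hypothetical helper, not actual Solr code:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FlExclusion {
    /** Expands an exclusion list into the equivalent inclusion list. */
    public static List<String> expandFl(Set<String> storedFields,
                                        Set<String> excluded) {
        return storedFields.stream()
                .filter(f -> !excluded.contains(f))
                .sorted()
                .collect(Collectors.toList());
    }
}
```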






RE: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #684: POMs out of sync

2014-08-25 Thread Uwe Schindler
Hi Steve,

could this be caused by the user name change from hudson to jenkins? I copied 
~/.m2, but maybe something is missing.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Monday, August 25, 2014 11:50 AM
 To: dev@lucene.apache.org
 Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #684: POMs out of sync
 
 Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/684/
 
 No tests ran.
 
 Build Log:
 [...truncated 25266 lines...]
 BUILD FAILED
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:501: The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:174: The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-4.x/lucene/build.xml:492: The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:581: Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': Error retrieving previous build number for artifact 'org.apache.lucene:lucene-solr-grandparent:pom': repository metadata for: 'snapshot org.apache.lucene:lucene-solr-grandparent:4.11.0-SNAPSHOT' could not be retrieved from repository: apache.snapshots.https due to an error: Error transferring file: Server returned HTTP response code: 502 for URL: https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/4.11.0-SNAPSHOT/maven-metadata.xml
 
 Total time: 14 minutes 7 seconds
 Build step 'Invoke Ant' marked build as failure Recording test results Email
 was triggered for: Failure Sending email for trigger: Failure
 






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108965#comment-14108965
 ] 

Michael McCandless commented on LUCENE-5904:


OK, the EOFE is just because the very first commit is corrupt.

It happens with this seed because MDW throws an exception while IW is writing 
segments_1; IW then tries to remove segments_1 and MDW throws another 
exception (the new virus checker in this patch), so a corrupt segments_1 is 
left behind. If there were a prior commit, we would fall back to it at read 
time.

So net/net I don't think there's anything to fix here, except +1 to just have 
the test make an empty first commit (before any MDW exceptions are enabled).
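The fallback behavior described here can be modelled minimally: readers open the newest commit generation that is readable, so a corrupt newest commit is survivable unless it is also the only commit. A toy model, not the real SegmentInfos/IndexFileDeleter logic.

```java
import java.util.List;

public class CommitFallback {
    /**
     * Returns the generation of the latest readable commit, or -1 when even
     * the first commit is corrupt (the situation that triggers the
     * EOFException with this test seed).
     */
    public static int latestReadable(List<Boolean> corruptByGeneration) {
        for (int gen = corruptByGeneration.size() - 1; gen >= 0; gen--) {
            if (!corruptByGeneration.get(gen)) {
                return gen;
            }
        }
        return -1;
    }
}
```

An empty first commit in the test guarantees generation 0 is always readable, which is exactly the "+1" above.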

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you 
 can force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, 
 things like flush will fail.






[jira] [Commented] (LUCENE-5123) invert the codec postings API

2014-08-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108967#comment-14108967
 ] 

Michael McCandless commented on LUCENE-5123:


Thanks Rob!

 invert the codec postings API
 -

 Key: LUCENE-5123
 URL: https://issues.apache.org/jira/browse/LUCENE-5123
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5123.patch, LUCENE-5123.patch, LUCENE-5123.patch, 
 LUCENE-5123.patch, LUCENE-5123.patch


 Currently FieldsConsumer/PostingsConsumer/etc. is a push-oriented API, e.g. 
 FreqProxTermsWriter streams the postings at flush, and the default merge() 
 takes the incoming codec API and filters out deleted docs and pushes via the 
 same API (but that can be overridden).
 It could be cleaner if we allowed for a pull model instead (like 
 DocValues). For example, maybe FreqProxTermsWriter could expose a Terms view of 
 itself and just pass this to the codec consumer.
 This would give the codec more flexibility to e.g. do multiple passes if it 
 wanted to do things like encode high-frequency terms more efficiently with a 
 bitset-like encoding or other things...
 A codec can try to do things like this to some extent today, but it's very 
 difficult (look at the buffering in Pulsing). We made this change with DV and it 
 made a lot of interesting optimizations easy to implement...






[jira] [Commented] (LUCENE-5268) Cutover more postings formats to the inverted pull API

2014-08-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108966#comment-14108966
 ] 

Michael McCandless commented on LUCENE-5268:


Thanks Rob!

 Cutover more postings formats to the inverted pull API
 

 Key: LUCENE-5268
 URL: https://issues.apache.org/jira/browse/LUCENE-5268
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5268.patch, LUCENE-5268.patch


 In LUCENE-5123, we added a new, more flexible, pull API for writing
 postings.  This API allows the postings format to iterate the
 fields/terms/postings more than once, and mirrors the API for writing
 doc values.
 But that was just the first step (only SimpleText was cutover to the
 new API).  I want to cutover more components, so we can (finally)
 e.g. play with different encodings depending on the term's postings,
 such as using a bitset for high freq DOCS_ONLY terms (LUCENE-5052).
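As a toy illustration of why a pull model enables such per-term decisions (hypothetical names, not Lucene's actual codec API), a consumer that can iterate the terms twice can gather a global statistic first and then choose an encoding per term, which a single-pass push consumer cannot do without buffering:

```java
import java.util.*;

// Hypothetical sketch, not Lucene's actual FieldsConsumer API: a pull-style
// consumer iterates the term -> docFreq map twice. Pass 1 computes the max
// docFreq; pass 2 assigns a bitset-like encoding to high-frequency terms.
public class PullPostings {
    static Map<String, String> chooseEncodings(Map<String, Integer> termDocFreqs) {
        int max = 0;
        for (int df : termDocFreqs.values()) {   // pass 1: global statistic
            max = Math.max(max, df);
        }
        Map<String, String> encodings = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : termDocFreqs.entrySet()) {
            // pass 2: terms at >= half the max docFreq get the dense encoding
            encodings.put(e.getKey(), e.getValue() * 2 >= max ? "bitset" : "vint");
        }
        return encodings;
    }
}
```

A push consumer sees each term exactly once as it streams by, so it would have to buffer all postings to make the same choice; this is the flexibility the pull cutover buys.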






RE: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #684: POMs out of sync

2014-08-25 Thread Uwe Schindler
It's unrelated to the username change. I chatted with infra.

Others have same problem:
https://issues.apache.org/jira/browse/INFRA-7984

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Monday, August 25, 2014 11:52 AM
 To: dev@lucene.apache.org
 Cc: Steve Rowe
 Subject: RE: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #684: POMs out of
 sync
 
 Hi Steve,
 
 could this be caused by the user name change from hudson to jenkins? I
 copied ~/.m2 but maybe something is missing.
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
  Sent: Monday, August 25, 2014 11:50 AM
  To: dev@lucene.apache.org
  Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #684: POMs out of sync
 
  Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/684/
 
  No tests ran.
 
  Build Log:
  [...truncated 25266 lines...]
  BUILD FAILED
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
  4.x/build.xml:501: The following error occurred while executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
  4.x/build.xml:174: The following error occurred while executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
  4.x/lucene/build.xml:492: The following error occurred while executing
  this
  line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
  4.x/lucene/common-build.xml:581: Error deploying artifact
  'org.apache.lucene:lucene-solr-grandparent:pom': Error retrieving
  previous build number for artifact 'org.apache.lucene:lucene-solr-
 grandparent:pom':
  repository metadata for: 'snapshot org.apache.lucene:lucene-solr-
  grandparent:4.11.0-SNAPSHOT' could not be retrieved from repository:
  apache.snapshots.https due to an error: Error transferring file:
  Server returned HTTP response code: 502 for URL:
  https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/4.11.0-SNAPSHOT/maven-metadata.xml
 
  Total time: 14 minutes 7 seconds
  Build step 'Invoke Ant' marked build as failure Recording test results
  Email was triggered for: Failure Sending email for trigger: Failure
 
 
 
 





[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_67) - Build # 4271 - Still Failing!

2014-08-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4271/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=59 closes=58

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=59 closes=58
at __randomizedtesting.SeedInfo.seed([3304B318A2BBFB21]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:439)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:186)
at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11400 lines...]
   [junit4] Suite: org.apache.solr.core.TestLazyCores
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestLazyCores-3304B318A2BBFB21-001\init-core-data-001
   [junit4]   2 2742123 T7273 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2 2742124 T7273 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 2742124 T7273 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\'
   [junit4]   2 2742127 T7273 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/core/src/test-files/solr/collection1/lib/.svn/'
 to classloader
   [junit4]   2 2742127 T7273 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 2742129 T7273 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 2742265 T7273 oasc.SolrConfig.init Using Lucene 
MatchVersion: 5.0.0
   [junit4]   2 2742307 T7273 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-minimal.xml
   [junit4]   2 2742307 T7273 oass.IndexSchema.readSchema Reading Solr Schema 
from schema-tiny.xml
   [junit4]   2 2742314 T7273 oass.IndexSchema.readSchema [null] Schema 
name=tiny
   [junit4]   2 2742333 T7273 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 2742334 T7273 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 2742334 T7273 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 

[jira] [Commented] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov commented on SOLR-6392:
--

[~dancollins]

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:28 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct


was (Author: imeleshkov):
[~dancollins]

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:31 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:30 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:32 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






Re: [VOTE] 4.10.0 RC0

2014-08-25 Thread Adrien Grand
+1

SUCCESS! [0:48:56.516899]

On Mon, Aug 25, 2014 at 11:49 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi Ryan,

 will you add the RELEASE_NOTES template pages to the Lucene and Solr wiki? I 
 would like to add the important note that the security issues with Apache POI in 
 Solr's contrib/extraction are resolved with this release. I already checked 
 the artifacts manually to confirm they really fix the Solr security issues with 
 contrib/extraction, and I also checked NOTICE.txt (and committed some changes to 
 trunk/4.x). Now I am waiting for the smoker to finish!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Ryan Ernst [mailto:r...@iernst.net]
 Sent: Saturday, August 23, 2014 2:09 AM
 To: dev@lucene.apache.org
 Subject: [VOTE] 4.10.0 RC0

 Please vote for the first release candidate for Lucene/Solr 4.10.0.

 The artifacts can be downloaded here:
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
 rev1619858

 Or you can run the smoker tester directly with this command line (assuming
 you have JAVA7_HOME set):
 python3.2 -u dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
 rev1619858
 1619858 4.10.0 /tmp/smoke_test_4_10

 Please note, the RC number is starting at 0 because I used the sample
 command line in buildAndPushRelease.py.  If there is another release, I will
 jump to RC2 to avoid confusion (thus it would be the second
 RC).  I also plan to open an issue to clean up some things about
 buildAndPushRelease.py help (or lack thereof).

  SUCCESS! [0:35:20.208893]
 Here is my +1

 Thanks,
 Ryan







-- 
Adrien




[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:46 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

{code:BASH}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:48 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works 
fine, I assume the configurations are correct and the collections use 
their specific configs.

{code:BASH}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

{code:BASH}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:54 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works 
fine, I assume the configurations are correct and the collections use 
their specific configs.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK using the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
The {{solr.solr.home}} is configured manually and contains a {{cores}} folder 
with a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} with the following text:
{code:none}
name=collectionName
{code}


was (Author: imeleshkov):
[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
correct

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
That takes effect for both collections. I'm checking using 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading cores makes a difference; the second config is 
not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works 
fine, I assume the configurations are correct and the collections use 
their specific configs.

{code:BASH}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single Zookeeper node. 
 Steps to reproduce the error:
 # have Solr+ZK stopped, with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - it has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:58 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
Correct.

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
It takes effect for both collections. I'm checking via 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading the cores makes a difference. The second 
config is not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works fine, 
I assume the configurations are correct and each collection uses its own 
config.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK with the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
{{solr.solr.home}} is configured manually and contains a {{cores}} folder with 
a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} file with the following 
content:
{code:none}
name=collectionName
{code}

{quote}
There used to be a fall-back approach in Solr: if you started a core but 
didn't tell it to use any config from ZK AND there was only 1 possible config 
in ZK, then Solr guessed that was what you meant and set up the links.
{quote}

OK, that looks very close to the problem I have, but I explicitly specify 
collection names for both collections. And if I deliver configs for both 
collections and then restart Solr/reload the cores, the changes are not 
applied, which is unexpected behavior.





[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 10:59 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
Correct.

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
It takes effect for both collections. I'm checking via 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading the cores makes a difference. The second 
config is not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works fine, 
I assume the configurations are correct and each collection uses its own 
config.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK with the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
{{solr.solr.home}} is configured manually and contains a {{cores}} folder with 
a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} file with the following 
content:
{code:none}
name=collectionName
{code}

{quote}
There used to be a fall-back approach in Solr: if you started a core but 
didn't tell it to use any config from ZK AND there was only 1 possible config 
in ZK, then Solr guessed that was what you meant and set up the links.
{quote}

OK, that looks very close to the problem I have, but I explicitly specify 
collection names for both collections. And if I later deliver the config for 
the second collection and restart Solr/reload the cores, the changes are not 
applied, which is unexpected behavior.




[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 11:00 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
Correct.

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
It takes effect for both collections. I'm checking via 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading the cores makes a difference. The second 
config is not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works fine, 
I assume the configurations are correct and each collection uses its own 
config.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK with the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
{{solr.solr.home}} is configured manually and contains a {{cores}} folder with 
a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} file with the following 
content:
{code:none}
name=collectionName
{code}

{quote}
There used to be a fall-back approach in Solr: if you started a core but 
didn't tell it to use any config from ZK AND there was only 1 possible config 
in ZK, then Solr guessed that was what you meant and set up the links.
{quote}

OK, that looks very close to the problem I have, but I explicitly specify 
collection names for both collections. *And if I later deliver the config for 
the second collection and restart Solr/reload the cores, the changes are not 
applied, which is unexpected behavior.*




org.apache.lucene.analysis.Analyzer

2014-08-25 Thread DVHV Sekhar
Please add a setter for the property below, since its absence breaks the 
implementation I built on the Lucene 2.9 APIs:

private final ReuseStrategy reuseStrategy;


Basically, my reuse strategy depends on external configuration, and I need to 
reset the strategy in the analyzer whenever the configuration changes.


PS: I'm working on a project to migrate code that uses Lucene 2.9 to a new 
implementation on Lucene 4.9.

Thanks,
Sekhar

RE: org.apache.lucene.analysis.Analyzer

2014-08-25 Thread Uwe Schindler
Hi,

This field is declared to be "final", so a setter will not be added.

Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de http://www.thetaphi.de/ 

eMail: u...@thetaphi.de
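For anyone hitting the same constraint: since the field is final and set via the {{Analyzer}} constructor, one option is to inject a delegating strategy whose inner delegate can be swapped when the external configuration changes. The sketch below models the pattern with made-up stand-in types ({{Strategy}}, {{StubAnalyzer}}) so it stands alone; it is not the real Lucene API. With Lucene 4.9 the same idea should apply by passing a custom {{ReuseStrategy}} to the {{Analyzer(ReuseStrategy)}} constructor.

```java
import java.util.concurrent.atomic.AtomicReference;

// Stand-in for a reuse strategy: maps a field name to reusable components.
// (Illustrative only - not the Lucene ReuseStrategy API.)
interface Strategy {
    String componentsFor(String field);
}

// Wrapper whose inner delegate can be swapped at runtime; the analyzer's
// final field keeps pointing at this wrapper the whole time.
final class SwappableStrategy implements Strategy {
    private final AtomicReference<Strategy> delegate;

    SwappableStrategy(Strategy initial) {
        delegate = new AtomicReference<>(initial);
    }

    void swap(Strategy next) {                 // call on configuration change
        delegate.set(next);
    }

    @Override
    public String componentsFor(String field) {
        return delegate.get().componentsFor(field);
    }
}

// Analogous to Analyzer: the strategy is final and constructor-injected.
final class StubAnalyzer {
    private final Strategy reuseStrategy;

    StubAnalyzer(Strategy reuseStrategy) {
        this.reuseStrategy = reuseStrategy;
    }

    String components(String field) {
        return reuseStrategy.componentsFor(field);
    }
}

public class SwapDemo {
    public static void main(String[] args) {
        SwappableStrategy strategy = new SwappableStrategy(f -> "global:" + f);
        StubAnalyzer analyzer = new StubAnalyzer(strategy);
        System.out.println(analyzer.components("title")); // global:title
        strategy.swap(f -> "perField:" + f);   // reconfigure without a setter
        System.out.println(analyzer.components("title")); // perField:title
    }
}
```

The analyzer itself never changes; only the wrapper's delegate does, so the final field and the immutability guarantees Lucene relies on are untouched.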

 

From: DVHV Sekhar [mailto:dvhv_sek...@yahoo.com.INVALID] 
Sent: Monday, August 25, 2014 1:07 PM
To: dev@lucene.apache.org
Subject: org.apache.lucene.analysis.Analyzer

 




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_67) - Build # 10971 - Still Failing!

2014-08-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10971/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseG1GC

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
startOffset 409 expected:<2136> but was:<2137>

Stack Trace:
java.lang.AssertionError: startOffset 409 expected:<2136> but was:<2137>
at 
__randomizedtesting.SeedInfo.seed([FA7412EA22A7FE62:72FD125481A3A957]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:183)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:296)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:300)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:815)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:614)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:513)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:437)
at 
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 11:26 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
Correct.

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
It takes effect for both collections. I'm checking via 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading the cores makes a difference. The second 
config is not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works fine, 
I assume the configurations are correct and each collection uses its own 
config.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK with the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
{{solr.solr.home}} is configured manually and contains a {{cores}} folder with 
a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} file with the following 
content:
{code:none}
name=collectionName
{code}

{quote}
There used to be a fall-back approach in Solr: if you started a core but 
didn't tell it to use any config from ZK AND there was only 1 possible config 
in ZK, then Solr guessed that was what you meant and set up the links.
{quote}

OK, that looks very close to the problem I have, but I explicitly specify 
collection names for both collections. *And if I later deliver the config for 
the second collection and restart Solr/reload the cores, the changes are not 
applied, which is unexpected behavior.*

The code you're referring to is most likely:
{code:java|title=org.apache.solr.cloud.ZkController}
{code}




[jira] [Comment Edited] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Ilya Meleshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108994#comment-14108994
 ] 

Ilya Meleshkov edited comment on SOLR-6392 at 8/25/14 11:28 AM:


[~dancollins] you wrote:
{quote}
You have 2 collections which should be using independent configurations (both 
stored in ZK).
{quote}
Correct.

{quote}
If you change config1 (and restart Solr), that takes effect (in collection1 or 
both?)
{quote}
It takes effect for both collections. I'm checking via 
http://solrhost/solr/#/collectionName/schema

{quote}
If you change config2 (and restart Solr), there is no apparent effect?
{quote}
Neither restarting Solr nor reloading the cores makes a difference. The second 
config is not applied.

{quote}
First question is then, are you sure both collections are using different 
configs, or have they somehow both picked up the same config?
{quote}
Since delivering configs for both collections before starting Solr works fine, 
I assume the configurations are correct and each collection uses its own 
config.
{quote}
How did you set them up, and how did you define which config each collection 
uses?
{quote}
Configs are delivered to ZK with the following command:
{code:none}
java -cp org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost ${zk.urls} -confdir 
%CONFIG_PATH% -confname ${solr.collection.name}
{code}
{{solr.solr.home}} is configured manually and contains a {{cores}} folder with 
a subfolder for each collection. Subfolder names match the collection names. 
Each collection subfolder has a {{core.properties}} file with the following 
content:
{code:none}
name=collectionName
{code}

{quote}
There used to be a fall-back approach in Solr: if you started a core but 
didn't tell it to use any config from ZK AND there was only 1 possible config 
in ZK, then Solr guessed that was what you meant and set up the links.
{quote}

OK, that looks very close to the problem I have, but I explicitly specify 
collection names for both collections. *And if I later deliver the config for 
the second collection and restart Solr/reload the cores, the changes are not 
applied, which is unexpected behavior.*

The code you're referring to is most likely:
{code:title=org.apache.solr.cloud.ZkController.java}
if (configNames != null && configNames.size() == 1) {
  // no config set named, but there is only 1 - use it
  log.info("Only one config set found in zk - using it:" + configNames.get(0));
  collectionProps.put(CONFIGNAME_PROP, configNames.get(0));
  break;
}
{code}
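In plain terms, that fallback is: with no explicit config link and exactly one config set in ZK, link the lone config; otherwise an explicit name is needed. A self-contained sketch of that decision (method and names are made up for illustration, not Solr API), which would also explain the report, since with only one config delivered both collections silently link to it:

```java
import java.util.List;

public class ConfigLink {
    // Illustrative only (not Solr API): resolve which config set a collection
    // should link to, mirroring the fallback described in the quote above.
    static String resolveConfigName(String explicitName, List<String> configsInZk) {
        if (explicitName != null) {
            return explicitName;               // an explicit link always wins
        }
        if (configsInZk != null && configsInZk.size() == 1) {
            // no config set named, but there is only 1 - use it
            return configsInZk.get(0);
        }
        throw new IllegalStateException("No config name given and "
                + (configsInZk == null ? 0 : configsInZk.size())
                + " config sets in ZK - cannot guess one");
    }

    public static void main(String[] args) {
        // Lone config set in ZK: it is guessed for every unlinked collection.
        System.out.println(resolveConfigName(null, List.of("onlyConfig")));
        // With two config sets, an explicit name is required.
        System.out.println(resolveConfigName("collection2conf", List.of("a", "b")));
    }
}
```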




[jira] [Created] (SOLR-6430) Date sort order for null and dates < 1970 is wrong

2014-08-25 Thread Alexander Block (JIRA)
Alexander Block created SOLR-6430:
-

 Summary: Date sort order for null and dates < 1970 is wrong
 Key: SOLR-6430
 URL: https://issues.apache.org/jira/browse/SOLR-6430
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Alexander Block


I have a date field as follows:

<field name="ETD" type="date" indexed="true" stored="true" multiValued="false" />
...
<fieldType name="date" class="solr.TrieDateField" precisionStep="0" 
positionIncrementGap="0"/>

In my data set I have unset fields (null), dates that are pre-epoch (e.g. 
1930-02-13T23:00:00Z), and dates that are post-epoch (e.g. 
2000-12-31T23:00:00Z). When sorting in ascending order, I would expect the null 
fields to be treated as the lowest values, for example:
null
1930-02-13T23:00:00Z
2000-12-31T23:00:00Z

What I get however is:
1930-02-13T23:00:00Z
null
2000-12-31T23:00:00Z

It looks like null is not handled as the lowest possible value for a date field.
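Editorially, this ordering is consistent with Trie fields sorting missing values as the field's default (the epoch, i.e. 0), which lands null between pre-1970 and post-1970 dates. If so, the usual fix is the {{sortMissingFirst}}/{{sortMissingLast}} attributes on the field type; to make nulls sort lowest in ascending order, something like the following sketch (adapt to your schema):

{code:xml}
<fieldType name="date" class="solr.TrieDateField" precisionStep="0"
           positionIncrementGap="0" sortMissingFirst="true"/>
{code}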






RE: [VOTE] 4.10.0 RC0

2014-08-25 Thread Uwe Schindler
Hi,

- Smoke tester is happy: SUCCESS! [2:08:29.941569]
- Solr contains the security fixes to Apache POI
- The Solr NOTICE.txt file contained some outdated garbage; I removed it 
yesterday in SVN. This should not hold up the release.

So +1 to release these artifacts!
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Monday, August 25, 2014 11:50 AM
 To: dev@lucene.apache.org
 Subject: RE: [VOTE] 4.10.0 RC0
 
 Hi Ryan,
 
 will you add the RELEASE_NOTES template pages to the Lucene and Solr
 wiki? I would like to add the important note that security issues with Apache
 POI in Solr's contrib/extraction are resolved with this release. I already
 checked the artifacts manually to verify that they really fix the Solr
 security issues with contrib/extraction, and I also checked NOTICE.txt (and
 committed some changes to trunk/4.x). Now I am waiting for the smoker to
 finish!
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Ryan Ernst [mailto:r...@iernst.net]
  Sent: Saturday, August 23, 2014 2:09 AM
  To: dev@lucene.apache.org
  Subject: [VOTE] 4.10.0 RC0
 
  Please vote for the first release candidate for Lucene/Solr 4.10.0.
 
  The artifacts can be downloaded here:
  http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
  rev1619858
 
  Or you can run the smoker tester directly with this command line
  (assuming you have JAVA7_HOME set):
  python3.2 -u dev-tools/scripts/smokeTestRelease.py
  http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
  rev1619858
  1619858 4.10.0 /tmp/smoke_test_4_10
 
  Please note, the RC number is starting at 0 because I used the sample
  command line in buildAndPushRelease.py.  If there is another release,
  I will jump to RC2 to avoid confusion (thus it would be the second
  RC).  I also plan to open an issue to clean up some things about
  buildAndPushRelease.py help (or lack thereof).
 
   SUCCESS! [0:35:20.208893]
  Here is my +1
 
  Thanks,
  Ryan
 





[jira] [Commented] (SOLR-6427) HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs location with the file system based spellchecker which results in it trying to write to an i

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109049#comment-14109049
 ] 

ASF subversion and git services commented on SOLR-6427:
---

Commit 1620303 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1620303 ]

SOLR-6427: HdfsCollectionsAPIDistributedZkTest can fail because it tries to use 
an hdfs location with the file system based spellchecker which results in it 
trying to write to an illegal filesystem location.

 HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs 
 location with the file system based spellchecker which results in it trying 
 to write to an illegal filesystem location.
 

 Key: SOLR-6427
 URL: https://issues.apache.org/jira/browse/SOLR-6427
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor








[jira] [Commented] (SOLR-6427) HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs location with the file system based spellchecker which results in it trying to write to an i

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109051#comment-14109051
 ] 

ASF subversion and git services commented on SOLR-6427:
---

Commit 1620304 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1620304 ]

SOLR-6427: HdfsCollectionsAPIDistributedZkTest can fail because it tries to use 
an hdfs location with the file system based spellchecker which results in it 
trying to write to an illegal filesystem location.

 HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs 
 location with the file system based spellchecker which results in it trying 
 to write to an illegal filesystem location.
 

 Key: SOLR-6427
 URL: https://issues.apache.org/jira/browse/SOLR-6427
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor








[jira] [Resolved] (SOLR-6427) HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs location with the file system based spellchecker which results in it trying to write to an il

2014-08-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6427.
---

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 HdfsCollectionsAPIDistributedZkTest can fail because it tries to use an hdfs 
 location with the file system based spellchecker which results in it trying 
 to write to an illegal filesystem location.
 

 Key: SOLR-6427
 URL: https://issues.apache.org/jira/browse/SOLR-6427
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.11









[jira] [Commented] (SOLR-6392) If run Solr having two collections configured but only one config delivered to Zookeeper causes that config is applied for all collections

2014-08-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109057#comment-14109057
 ] 

Mark Miller commented on SOLR-6392:
---

bq. And if I later deliver configs for second collection and restart 
Solr/reload cores changes is not applied, that is unexpected behavior

Use the zkcli link command after 'delivering' the second set of configs to link 
them to the collection.

 If run Solr having two collections configured but only one config delivered 
 to Zookeeper causes that config is applied for all collections
 --

 Key: SOLR-6392
 URL: https://issues.apache.org/jira/browse/SOLR-6392
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Ilya Meleshkov

 I have the simplest Solr cloud configured locally: a single Solr node and a 
 single ZooKeeper node. 
 Steps to reproduce the error:
 # have stopped Solr+ZK with two collections
 # run ZK
 # deliver config to one collection only
 # run Solr - Solr runs without any complaints or errors
 # deliver config to the second collection - this has no effect
 But if I deliver configs for both collections before starting Solr, it works 
 perfectly.
 So I would say that Solr should fail with a meaningful error if there is no 
 config for some collection.






[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.8.0_20) - Build # 4178 - Still Failing!

2014-08-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4178/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrIndexConfig

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\tempDir-001\infostream.txt

C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\tempDir-001

C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\tempDir-001\infostream.txt
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\tempDir-001
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001

at __randomizedtesting.SeedInfo.seed([AD66895ACED5A028]:0)
at org.apache.lucene.util.TestUtil.rm(TestUtil.java:117)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:125)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11097 lines...]
   [junit4] Suite: org.apache.solr.core.TestSolrIndexConfig
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\init-core-data-001
   [junit4]   2 481742 T1628 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2 481747 T1628 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 481747 T1628 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\core\src\test-files\solr\collection1\'
   [junit4]   2 481749 T1628 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/core/src/test-files/solr/collection1/lib/.svn/'
 to classloader
   [junit4]   2 481752 T1628 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 481752 T1628 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 481801 T1628 oasu.SolrIndexConfig.init WARN IndexWriter 
infoStream file log is enabled: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\temp\solr.core.TestSolrIndexConfig-AD66895ACED5A028-001\tempDir-001/infostream.txt
   [junit4]   2This feature is deprecated. Remove @file from 
infoStream to output messages to solr's logfile
   [junit4]   2 481805 T1628 oasc.SolrConfig.init Using Lucene MatchVersion: 
4.11.0
   [junit4]   2 481826 T1628 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-indexconfig.xml
   [junit4]   2 481827 T1628 oass.IndexSchema.readSchema Reading Solr Schema 
from schema.xml
   [junit4]   2 481858 T1628 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 481990 T1628 oass.ByteField.init WARN ByteField is deprecated 
and will be removed in 5.0. You should use TrieIntField instead.
   [junit4]   2 481990 T1628 oass.ShortField.init WARN ShortField is 
deprecated and will be removed in 5.0. You should use TrieIntField instead.
   

[jira] [Commented] (SOLR-5847) The Admin GUI doesn't allow to abort a running dataimport

2014-08-25 Thread Thomas Champagne (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109071#comment-14109071
 ] 

Thomas Champagne commented on SOLR-5847:


Can you set the Fix Version property? I think this was fixed in Solr 4.10. 

 The Admin GUI doesn't allow to abort a running dataimport
 -

 Key: SOLR-5847
 URL: https://issues.apache.org/jira/browse/SOLR-5847
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, web gui
Affects Versions: 4.7
Reporter: Paco Garcia
Assignee: Erik Hatcher
Priority: Minor

 With the changes introduced in 4.7.0 Release by SOLR-5517 (Return HTTP error 
 on POST requests with no Content-Type), the jquery invocation to abort a 
 running dataimport fails with HTTP error code 415.
 The method POST should have some content in the body
 See comments in SOLR-5517
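The workaround the issue implies (give the abort POST an explicit Content-Type and a non-empty body so SOLR-5517's stricter checks accept it) can be sketched in Java. This is a hypothetical client-side illustration, not the Admin UI's actual jQuery code; the URL, core name, and parameter names are assumptions:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class AbortRequestDemo {
    // Build a POST to the dataimport handler's abort command that carries an
    // explicit Content-Type header and a non-empty body, so it is not
    // rejected with HTTP 415 by the stricter POST checks from SOLR-5517.
    public static HttpRequest buildAbortRequest(String coreBaseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(coreBaseUrl + "/dataimport?command=abort&wt=json"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("command=abort"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildAbortRequest("http://localhost:8983/solr/collection1");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending this against a running Solr would require an HttpClient and a live server; the sketch only shows the request shape that avoids the 415.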






[jira] [Commented] (SOLR-6428) Occasional OverseerTest#testOverseerFailure fail due to missing election node.

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109083#comment-14109083
 ] 

ASF subversion and git services commented on SOLR-6428:
---

Commit 1620319 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1620319 ]

SOLR-6428: Occasional OverseerTest#testOverseerFailure fail due to missing 
election node.
SOLR-5596: OverseerTest.testOverseerFailure - leader node already exists.

 Occasional OverseerTest#testOverseerFailure fail due to missing election node.
 --

 Key: SOLR-6428
 URL: https://issues.apache.org/jira/browse/SOLR-6428
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor

 {noformat}ERROR   4.32s J1 | OverseerTest.testOverseerFailure 
[junit4] Throwable #1: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /collections/collection1/leader_elect/shard1/election
 {noformat}






[jira] [Commented] (SOLR-5596) OverseerTest.testOverseerFailure - leader node already exists.

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109084#comment-14109084
 ] 

ASF subversion and git services commented on SOLR-5596:
---

Commit 1620319 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1620319 ]

SOLR-6428: Occasional OverseerTest#testOverseerFailure fail due to missing 
election node.
SOLR-5596: OverseerTest.testOverseerFailure - leader node already exists.

 OverseerTest.testOverseerFailure - leader node already exists.
 --

 Key: SOLR-5596
 URL: https://issues.apache.org/jira/browse/SOLR-5596
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


 Seeing this a bunch on jenkins - previous leader ephemeral node is still 
 around for some reason.






[jira] [Commented] (SOLR-5596) OverseerTest.testOverseerFailure - leader node already exists.

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109086#comment-14109086
 ] 

ASF subversion and git services commented on SOLR-5596:
---

Commit 1620320 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1620320 ]

SOLR-6428: Occasional OverseerTest#testOverseerFailure fail due to missing 
election node.
SOLR-5596: OverseerTest.testOverseerFailure - leader node already exists.

 OverseerTest.testOverseerFailure - leader node already exists.
 --

 Key: SOLR-5596
 URL: https://issues.apache.org/jira/browse/SOLR-5596
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


 Seeing this a bunch on jenkins - previous leader ephemeral node is still 
 around for some reason.






[jira] [Commented] (SOLR-6428) Occasional OverseerTest#testOverseerFailure fail due to missing election node.

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109085#comment-14109085
 ] 

ASF subversion and git services commented on SOLR-6428:
---

Commit 1620320 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1620320 ]

SOLR-6428: Occasional OverseerTest#testOverseerFailure fail due to missing 
election node.
SOLR-5596: OverseerTest.testOverseerFailure - leader node already exists.

 Occasional OverseerTest#testOverseerFailure fail due to missing election node.
 --

 Key: SOLR-6428
 URL: https://issues.apache.org/jira/browse/SOLR-6428
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor

 {noformat}ERROR   4.32s J1 | OverseerTest.testOverseerFailure 
[junit4] Throwable #1: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /collections/collection1/leader_elect/shard1/election
 {noformat}






[jira] [Resolved] (SOLR-6426) SolrZkClient clean can fail due to a race with children nodes.

2014-08-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6426.
---

Resolution: Fixed

 SolrZkClient clean can fail due to a race with children nodes.
 --

 Key: SOLR-6426
 URL: https://issues.apache.org/jira/browse/SOLR-6426
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.11









[jira] [Resolved] (SOLR-6425) If you're using the new global hdfs block cache option, you can end up reading corrupt files on file name reuse.

2014-08-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6425.
---

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 If you're using the new global hdfs block cache option, you can end up reading 
 corrupt files on file name reuse.
 -

 Key: SOLR-6425
 URL: https://issues.apache.org/jira/browse/SOLR-6425
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.11

 Attachments: SOLR-6425.patch


 Revealed by 'HdfsBasicDistributedZkTest frequently fails'.






[jira] [Resolved] (SOLR-6428) Occasional OverseerTest#testOverseerFailure fail due to missing election node.

2014-08-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6428.
---

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 Occasional OverseerTest#testOverseerFailure fail due to missing election node.
 --

 Key: SOLR-6428
 URL: https://issues.apache.org/jira/browse/SOLR-6428
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.11


 {noformat}ERROR   4.32s J1 | OverseerTest.testOverseerFailure 
[junit4] Throwable #1: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /collections/collection1/leader_elect/shard1/election
 {noformat}






[jira] [Resolved] (SOLR-6419) The ChaosMonkey tests should use fewer jetty instances on non-nightly runs.

2014-08-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6419.
---

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 The ChaosMonkey tests should use fewer jetty instances on non-nightly runs.
 

 Key: SOLR-6419
 URL: https://issues.apache.org/jira/browse/SOLR-6419
 Project: Solr
  Issue Type: Sub-task
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.11









[jira] [Commented] (SOLR-6414) Update to Hadoop 2.5.0

2014-08-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109106#comment-14109106
 ] 

Mark Miller commented on SOLR-6414:
---

Does not seem to be in Maven Central yet - I only see 2.4.1 right now.

 Update to Hadoop 2.5.0
 --

 Key: SOLR-6414
 URL: https://issues.apache.org/jira/browse/SOLR-6414
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.11









[jira] [Commented] (SOLR-5596) OverseerTest.testOverseerFailure - leader node already exists.

2014-08-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109107#comment-14109107
 ] 

Mark Miller commented on SOLR-5596:
---

Okay, now I think this will stop. We will see.

 OverseerTest.testOverseerFailure - leader node already exists.
 --

 Key: SOLR-5596
 URL: https://issues.apache.org/jira/browse/SOLR-5596
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


 Seeing this a bunch on jenkins - previous leader ephemeral node is still 
 around for some reason.






[jira] [Updated] (SOLR-5847) The Admin GUI doesn't allow to abort a running dataimport

2014-08-25 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-5847:
---

Fix Version/s: 4.10
   5.0

 The Admin GUI doesn't allow to abort a running dataimport
 -

 Key: SOLR-5847
 URL: https://issues.apache.org/jira/browse/SOLR-5847
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, web gui
Affects Versions: 4.7
Reporter: Paco Garcia
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, 4.10


 With the changes introduced in 4.7.0 Release by SOLR-5517 (Return HTTP error 
 on POST requests with no Content-Type), the jquery invocation to abort a 
 running dataimport fails with HTTP error code 415.
 The method POST should have some content in the body
 See comments in SOLR-5517






[jira] [Updated] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5904:


Attachment: LUCENE-5904.patch

Updated patch. I also fixed a few more false fails.

Still, in general there are interesting failures every time you run core tests 
with the patch. testThreadInterruptDeadlock got angry because write.lock 
couldn't be removed; I need to investigate that deletion further.

I also haven't looked at this:
{noformat}
   [junit4] Suite: org.apache.lucene.index.TestCodecHoldsOpenFiles
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestCodecHoldsOpenFiles -Dtests.method=test 
-Dtests.seed=1908AF7C5FA5D64A -Dtests.locale=sl -Dtests.timezone=Asia/Bangkok 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.01s J1 | TestCodecHoldsOpenFiles.test 
   [junit4] Throwable #1: java.io.FileNotFoundException: segments_1 in 
dir=RAMDirectory@25ac448 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@150c542d
   [junit4]at 
__randomizedtesting.SeedInfo.seed([1908AF7C5FA5D64A:915C90A6F159BBB2]:0)
   [junit4]at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:593)
   [junit4]at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:106)
   [junit4]at 
org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:347)
   [junit4]at 
org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:458)
   [junit4]at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:794)
   [junit4]at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:640)
   [junit4]at 
org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:454)
   [junit4]at 
org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:398)
{noformat}

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.
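The retry mechanism described above can be sketched as follows. This is an illustrative toy, not the actual IndexWriter/IndexFileDeleter code; the class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Toy sketch of retry-on-delete: a file that cannot be deleted right now
// (e.g. held open by a virus scanner on Windows) goes into a pending set
// instead of failing the caller; deletePendingFiles() retries it later.
public class PendingDeleter {
    private final Set<Path> pending = new HashSet<>();

    public void delete(Path p) {
        try {
            Files.deleteIfExists(p);
        } catch (IOException e) {
            pending.add(p); // could not delete now; remember it for a retry
        }
    }

    // Retry every pending deletion; keep only the ones that still fail.
    public void deletePendingFiles() {
        Set<Path> stillPending = new HashSet<>();
        for (Path p : pending) {
            try {
                Files.deleteIfExists(p);
            } catch (IOException e) {
                stillPending.add(p);
            }
        }
        pending.clear();
        pending.addAll(stillPending);
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

The point of the issue is that every code path that deletes files (e.g. CFS creation) would need to route failures through such a pending set rather than propagating the exception.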






[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 605 - Still Failing

2014-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/605/

1 tests failed.
REGRESSION:  
org.apache.solr.handler.component.DistributedTermsComponentTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=17853, name=Thread-5233, 
state=RUNNABLE, group=TGRP-DistributedTermsComponentTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=17853, name=Thread-5233, state=RUNNABLE, 
group=TGRP-DistributedTermsComponentTest]
at 
__randomizedtesting.SeedInfo.seed([C93CB2E5A7E8F3E1:48DA3CFDD0B793DD]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:50500/bp_h/w
at __randomizedtesting.SeedInfo.seed([C93CB2E5A7E8F3E1]:0)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:582)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: https://127.0.0.1:50500/bp_h/w
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:577)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 5 more




Build Log:
[...truncated 12003 lines...]
   [junit4] Suite: 
org.apache.solr.handler.component.DistributedTermsComponentTest
   [junit4]   2 Creating dataDir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/test/J1/./temp/solr.handler.component.DistributedTermsComponentTest-C93CB2E5A7E8F3E1-001/init-core-data-001
   [junit4]   2 2382404 T17241 oas.SolrTestCaseJ4.buildSSLConfig Randomized 
ssl (true) and clientAuth (true)
   [junit4]   2 2382404 T17241 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /bp_h/w
   [junit4]   2 2382411 T17241 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 2382415 T17241 oejs.Server.doStart 

[jira] [Updated] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5904:


Attachment: LUCENE-5904.patch

I think it's just the long tail left now:
* testThreadInterruptDeadLock and write.lock
* testCodecHoldsOpenFiles and whatever is going on there
* replicator tests (I don't know if it's just a test issue and the virus scanner 
should be disabled)

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.
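The retry behavior described in the issue (a failed delete goes on a pending list and is retried later, e.g. via deletePendingFiles) can be sketched in a few lines. This is a hypothetical simplification for illustration, not Lucene's actual IndexFileDeleter; the `locked` set stands in for a virus scanner temporarily holding a file open:

```python
# Sketch of the retry-on-failed-delete pattern described above. This is a
# hypothetical simplification, NOT Lucene's IndexFileDeleter; `locked`
# simulates a virus scanner temporarily holding a file open.
class RetryingDeleter:
    def __init__(self, try_delete):
        self.try_delete = try_delete  # try_delete(name) -> True if deleted
        self.pending = []             # failed deletes, retried later

    def delete(self, name):
        if not self.try_delete(name):
            self.pending.append(name)  # queue for retry instead of failing

    def delete_pending_files(self):
        # analogue of forcing the retry with deletePendingFiles()
        self.pending = [n for n in self.pending if not self.try_delete(n)]

locked = {"_0.cfs"}                   # held open by the "scanner"
files = {"_0.cfs", "_0.si"}

def try_delete(name):
    if name in locked or name not in files:
        return False
    files.discard(name)
    return True

d = RetryingDeleter(try_delete)
d.delete("_0.si")                     # deleted immediately
d.delete("_0.cfs")                    # fails -> queued
print(d.pending)                      # ['_0.cfs']
locked.clear()                        # the scanner lets go
d.delete_pending_files()
print(d.pending)                      # []
```

The point of the issue is that code paths like CFS creation bypass this queue, so a transiently undeletable file fails the operation outright.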



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-25 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109148#comment-14109148
 ] 

Ryan Ernst commented on LUCENE-5859:


Yes, thanks.  I added 4.10 to Fix Versions.

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: description of the issue updated as expected. We should remove this 
 API completely. No one else on the planet has APIs that require a mandatory 
 version parameter.
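The "no-arg ctors forwarding to VERSION_CURRENT" proposal is essentially a defaulted constructor parameter. A rough Python analogue (names are illustrative, not Lucene's actual API):

```python
# Hypothetical analogue of "no-arg ctors forwarding to VERSION_CURRENT":
# callers who don't care about index back-compat omit the version entirely.
VERSION_CURRENT = (4, 10)

class VersionedAnalyzer:
    def __init__(self, version=VERSION_CURRENT):
        # only users who need back-compat pass an explicit old version
        self.version = version

latest = VersionedAnalyzer()                # prototyping: no version juggling
legacy = VersionedAnalyzer(version=(4, 3))  # explicit opt-in to old behavior
print(latest.version, legacy.version)
```

Prototyping code gets current behavior by default, while back-compat users keep the explicit opt-in.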






[jira] [Updated] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5904:


Attachment: LUCENE-5904.patch

I fixed the CodecHoldsOpenFiles test. It was buggy before: it might never even 
commit due to close randomization, and the checkIndex at the end would 
just do nothing.

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Updated] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-25 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5859:
---

Fix Version/s: 4.10

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: description of the issue updated as expected. We should remove this 
 API completely. No one else on the planet has APIs that require a mandatory 
 version parameter.






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1192: POMs out of sync

2014-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1192/

No tests ran.

Build Log:
[...truncated 26167 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:501:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:174:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:498:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2100:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1537:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:580:
 Error deploying artifact 'org.apache.lucene:lucene-expressions:jar': Error 
deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-expressions/5.0.0-SNAPSHOT/lucene-expressions-5.0.0-20140825.143150-6-javadoc.jar.sha1.
 Return code is: 502

Total time: 20 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6431) Test fail in FullSolrCloudDistribCmdsTest on nightly runs.

2014-08-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109179#comment-14109179
 ] 

Mark Miller commented on SOLR-6431:
---

{code}
Caused by: org.apache.http.ParseException: Invalid content type: 
at org.apache.http.entity.ContentType.parse(ContentType.java:233)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:496)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
... 47 more
{code}

 Test fail in FullSolrCloudDistribCmdsTest on nightly runs.
 --

 Key: SOLR-6431
 URL: https://issues.apache.org/jira/browse/SOLR-6431
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller

 {code}
 org.apache.solr.client.solrj.SolrServerException: Error executing query
   at __randomizedtesting.SeedInfo.seed([4CDCFB52D83A47A0:CD3A754AAF65279C]:0)
   at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:100)
   at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
   at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
   at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
   at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
 {code}






[jira] [Created] (SOLR-6431) Test fail in FullSolrCloudDistribCmdsTest on nightly runs.

2014-08-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6431:
-

 Summary: Test fail in FullSolrCloudDistribCmdsTest on nightly runs.
 Key: SOLR-6431
 URL: https://issues.apache.org/jira/browse/SOLR-6431
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller


{code}
org.apache.solr.client.solrj.SolrServerException: Error executing query
at __randomizedtesting.SeedInfo.seed([4CDCFB52D83A47A0:CD3A754AAF65279C]:0)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:100)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
{code}






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109190#comment-14109190
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620340 from [~mikemccand] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620340 ]

LUCENE-5904: make branch

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109193#comment-14109193
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620342 from [~mikemccand] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620342 ]

LUCENE-5904: MDW confesses when virus checker kicks in, if you run verbose

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109192#comment-14109192
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620341 from [~mikemccand] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620341 ]

LUCENE-5904: current patch

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109202#comment-14109202
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620343 from [~mikemccand] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620343 ]

LUCENE-5904: fix false test failure

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4815 - Still Failing

2014-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4815/

1 tests failed.
FAILED:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, startOffset=23,endOffset=22

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and endOffset must be >= startOffset, startOffset=23,endOffset=22
at __randomizedtesting.SeedInfo.seed([64A33F1CF1BBCBF9:EF8800DA8F5EB0A]:0)
at org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:107)
at org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
at org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at org.apache.lucene.analysis.de.GermanLightStemFilter.incrementToken(GermanLightStemFilter.java:48)
at org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:704)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:615)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:513)
at org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-6431) Test fail in FullSolrCloudDistribCmdsTest on nightly runs.

2014-08-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109241#comment-14109241
 ] 

Mark Miller commented on SOLR-6431:
---

Fails about 1 in 10 for me.

 Test fail in FullSolrCloudDistribCmdsTest on nightly runs.
 --

 Key: SOLR-6431
 URL: https://issues.apache.org/jira/browse/SOLR-6431
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller

 {code}
 org.apache.solr.client.solrj.SolrServerException: Error executing query
   at __randomizedtesting.SeedInfo.seed([4CDCFB52D83A47A0:CD3A754AAF65279C]:0)
   at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:100)
   at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
   at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
   at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
   at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
 {code}






Re: svn commit: r1619630 - in /lucene/dev/branches/branch_4x: ./ build.xml

2014-08-25 Thread Chris Hostetter
: 
: Thanks Hoss, that would fix this one false failure ... but there were
: many false failures (I think from tabs?), but I didn't look very
: closely.

I didn't try it; I was just guessing the obvious problem based on your 
fast revert.

Tabs could easily be fixed ... the no+commit and @+author issues are the 
only things I can think of that would need a creative solution if you're 
interested in trying to add it back.


: 
: Mike McCandless
: 
: http://blog.mikemccandless.com
: 
: 
: On Fri, Aug 22, 2014 at 12:58 PM, Chris Hostetter
: hossman_luc...@fucit.org wrote:
: 
:  Mike,
: 
:  What about...
: 
:  <!-- can't have no + commit in file or we'll fail our own check -->
:  <property name="at" value="@"/>
:  <property name="no" value="no"/>
:  <property name="commit" value="commit"/>
:  ...
:  <fail if="validate.patternsFound">The following files contain ${at}author tags, tabs, svn keywords or ${no}${commit}:${line.separator}${validate.patternsFound}</fail>
: 
: 
:  ?
: 
: 
:  : Date: Thu, 21 Aug 2014 23:44:17 -
:  : From: mikemcc...@apache.org
:  : Reply-To: dev@lucene.apache.org
:  : To: comm...@lucene.apache.org
:  : Subject: svn commit: r1619630 - in /lucene/dev/branches/branch_4x: ./
:  : build.xml
:  :
:  : Author: mikemccand
:  : Date: Thu Aug 21 23:44:16 2014
:  : New Revision: 1619630
:  :
:  : URL: http://svn.apache.org/r1619630
:  : Log:
:  : hmm, can't check .xml yet
:  :
:  : Modified:
:  : lucene/dev/branches/branch_4x/   (props changed)
:  : lucene/dev/branches/branch_4x/build.xml
:  :
:  : Modified: lucene/dev/branches/branch_4x/build.xml
:  : URL: http://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/build.xml?rev=1619630&r1=1619629&r2=1619630&view=diff
:  : ==============================================================================
:  : --- lucene/dev/branches/branch_4x/build.xml (original)
:  : +++ lucene/dev/branches/branch_4x/build.xml Thu Aug 21 23:44:16 2014
:  : @@ -81,7 +81,6 @@
:  :    <fileset dir="${validate.currDir}">
:  :      <include name="**/*.java"/>
:  :      <include name="**/*.py"/>
:  : -    <include name="**/*.xml"/>
:  :      <exclude name="**/backwards/**"/>
:  :      <or>
:  :        <containsregexp expression="@author\b" casesensitive="yes"/>
:  :
:  :
:  :
: 
:  -Hoss
:  http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-6304) Transforming and Indexing custom JSON data

2014-08-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109381#comment-14109381
 ] 

Noble Paul commented on SOLR-6304:
--

[~ingorenner] 'debug' somehow suggested that it is actually doing indexing. I 
thought of 'dryrun', which better describes the functionality but is not as simple 
as the single word 'echo'.

 Transforming and Indexing custom JSON data
 --

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 the split param is the element in the tree at which the input should be split into 
 multiple docs. The 'f' params are field-name mappings.
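The split/f behavior described above can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration only, not Solr's actual JSON record reader; paths under the split element are resolved against each child, everything else against the enclosing record:

```python
import json

# Illustrative sketch of the split + field-mapping behavior; NOT Solr's
# actual implementation of /update/json/docs.
def split_docs(record, split_path, field_map):
    """Emit one doc per element at split_path, applying (field, path) mappings."""
    def get(obj, path):
        for part in path.strip("/").split("/"):
            obj = obj[part]
        return obj

    docs = []
    for child in get(record, split_path):
        doc = {}
        for field, path in field_map:
            if path.startswith(split_path + "/"):
                # path points inside the split element
                doc[field] = get(child, path[len(split_path) + 1:])
            else:
                # path is inherited from the enclosing record
                doc[field] = get(record, path)
        docs.append(doc)
    return docs

record = json.loads("""
{"id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
 "batters": {"batter": [{"id": "1001", "type": "Regular"},
                        {"id": "1002", "type": "Chocolate"},
                        {"id": "1003", "type": "Blueberry"},
                        {"id": "1004", "type": "Devil's Food"}]}}
""")

docs = split_docs(record, "/batters/batter",
                  [("recipeId", "/id"), ("recipeType", "/type"),
                   ("id", "/batters/batter/id"), ("type", "/batters/batter/type")])
for doc in docs:
    print(doc)
```

Each element of the `batter` array becomes its own doc, with `recipeId` and `recipeType` pulled down from the parent record.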






[jira] [Updated] (LUCENE-5899) shenandoah GC can cause ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-5899:
-

  Description: 
User report of a bizarre ClassCastException when running some lucene code with the 
(experimental) Shenandoah GC

{code}
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
{code}

  was:
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)


 Priority: Minor  (was: Major)
  Environment: 
I test lucene on shenandoah with big memory + bigdata(32g, 128g).   
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

Fix Version/s: (was: 4.10)
  Summary: shenandoah GC can cause ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum  (was: Caused by: 
java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum)

I've updated the summary & description to make it more clear what circumstances 
this happens in.

Littlestar: it would be very helpful if you could provide some explicit details 
on what exactly you mean by "test lucene" ... is this test code you wrote? Were 
you running "ant test" from the lucene distribution? ... which test threw this 
exception? Does it reproduce reliably?

if you do open a JDK bug regarding this, please link to this issue, and then 
post the resulting bug # back here as a comment as well.

 shenandoah GC can cause ClassCastException: 
 org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
 org.apache.lucene.index.DocsAndPositionsEnum
 --

 Key: LUCENE-5899
 URL: https://issues.apache.org/jira/browse/LUCENE-5899
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/codecs
Affects Versions: 4.9
 Environment: I test lucene on shenandoah with big memory + 
 bigdata(32g, 128g).   
 http://openjdk.java.net/jeps/189
 http://icedtea.classpath.org/hg/shenandoah/
Reporter: Littlestar

[jira] [Commented] (SOLR-6390) Remove unnecessary checked exception for CloudSolrServer constructor

2014-08-25 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109580#comment-14109580
 ] 

Shawn Heisey commented on SOLR-6390:


Thanks, Steve!


 Remove unnecessary checked exception for CloudSolrServer constructor
 

 Key: SOLR-6390
 URL: https://issues.apache.org/jira/browse/SOLR-6390
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
Assignee: Shawn Heisey
Priority: Trivial
 Fix For: 5.0, 4.11

 Attachments: SOLR-6390.patch, SOLR-6390.patch, SOLR-6390.patch, 
 SOLR-6390.patch


 The CloudSolrServer constructors can be simplified and can remove an 
 unnecessary checked exception for one of the 4 constructors.
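
A hedged sketch of that kind of cleanup (the class names below are hypothetical stand-ins, not the actual SolrJ API): delegate every constructor variant to one canonical constructor and drop the `throws` clause that could never actually fire, so callers no longer need a pointless try/catch.

```java
// Hypothetical stand-ins illustrating the pattern, not SolrJ itself.
class LoadBalancedClient {
    // Historically this kind of constructor declared `throws
    // MalformedURLException` even though it could not throw for valid input.
    LoadBalancedClient(String... urls) { }
}

class CloudClientSketch {
    private final String zkHost;
    private final LoadBalancedClient lb;

    // After the cleanup: this variant delegates to the canonical
    // constructor and declares no checked exception.
    CloudClientSketch(String zkHost) {
        this(zkHost, new LoadBalancedClient());
    }

    // Canonical constructor: all state assignment happens here.
    CloudClientSketch(String zkHost, LoadBalancedClient lb) {
        this.zkHost = zkHost;
        this.lb = lb;
    }

    String getZkHost() { return zkHost; }
}
```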



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6390) Remove unnecessary checked exception for CloudSolrServer constructor

2014-08-25 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-6390.


Resolution: Fixed

 Remove unnecessary checked exception for CloudSolrServer constructor
 

 Key: SOLR-6390
 URL: https://issues.apache.org/jira/browse/SOLR-6390
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
Assignee: Shawn Heisey
Priority: Trivial
 Fix For: 5.0, 4.11

 Attachments: SOLR-6390.patch, SOLR-6390.patch, SOLR-6390.patch, 
 SOLR-6390.patch


 The CloudSolrServer constructors can be simplified and can remove an 
 unnecessary checked exception for one of the 4 constructors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109618#comment-14109618
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620418 from [~rcmuir] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620418 ]

LUCENE-5904: fix false fails

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.
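
A generic sketch of the retry-list idea described above (this is not Lucene's actual IndexFileDeleter; the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Sketch: failed deletes go into a pending set and are retried later,
// instead of making the calling operation (e.g. a flush) fail outright.
class RetryingDeleter {
    private final Set<Path> pending = new HashSet<>();

    void delete(Path p) {
        try {
            Files.deleteIfExists(p);
        } catch (IOException e) {
            // e.g. a virus scanner on Windows temporarily holds the file open
            pending.add(p);
        }
    }

    // A periodic task (or an explicit caller) forces another attempt.
    void deletePendingFiles() {
        pending.removeIf(p -> {
            try {
                Files.deleteIfExists(p);
                return true;   // succeeded, drop from the retry list
            } catch (IOException e) {
                return false;  // still locked, keep for next round
            }
        });
    }

    int pendingCount() { return pending.size(); }
}
```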



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109632#comment-14109632
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620421 from [~rcmuir] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620421 ]

LUCENE-5904: fix false fail

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 4.10.0 RC0

2014-08-25 Thread Tomás Fernández Löbbe
+1

SUCCESS! [0:57:03.919594]


On Mon, Aug 25, 2014 at 4:54 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 - Smoke tester is happy: SUCCESS! [2:08:29.941569]
 - Solr contains the security fixes to Apache POI
  - The Solr NOTICE.txt file contains some outdated garbage; I removed that
  yesterday in SVN. This should not hold up the release.

 So +1 to release these artifacts!
 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Monday, August 25, 2014 11:50 AM
  To: dev@lucene.apache.org
  Subject: RE: [VOTE] 4.10.0 RC0
 
  Hi Ryan,
 
  will you add the RELEASE_NOTES templates pages to the Lucene and Solr
  wiki? I would like to add the important note that security issues with
 Apache
  POI in Solr's contrib/extraction are resolved with this release. I
 already
  checked the artifacts manually that they really fix the Solr security
 issues with
  contrib/extraction, I also checked NOTICE.txt (and committed some changes
  to trunk/4.x). Now I am waiting for the smoker to finish!
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
   -Original Message-
   From: Ryan Ernst [mailto:r...@iernst.net]
   Sent: Saturday, August 23, 2014 2:09 AM
   To: dev@lucene.apache.org
   Subject: [VOTE] 4.10.0 RC0
  
   Please vote for the first release candidate for Lucene/Solr 4.10.0.
  
   The artifacts can be downloaded here:
   http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
   rev1619858
  
   Or you can run the smoker tester directly with this command line
   (assuming you have JAVA7_HOME set):
   python3.2 -u dev-tools/scripts/smokeTestRelease.py
   http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-
   rev1619858
   1619858 4.10.0 /tmp/smoke_test_4_10
  
   Please note, the RC number is starting at 0 because I used the sample
   command line in buildAndPushRelease.py.  If there is another release,
   I will jump to RC2 to avoid confusion (thus it would be the second
   RC).  I also plan to open an issue to clean up some things about
    buildAndPushRelease.py help (or lack thereof).
  
SUCCESS! [0:35:20.208893]
   Here is my +1
  
   Thanks,
   Ryan
  
   -
   To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
   additional commands, e-mail: dev-h...@lucene.apache.org
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
  commands, e-mail: dev-h...@lucene.apache.org


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109665#comment-14109665
 ] 

Robert Muir commented on LUCENE-5904:
-

Another real bug, I think, in IndexFileDeleter (found by TestCrash):

IW crashes or something, and we have some leftover files (like _0.si, imagine 
from an initial empty commit).

When we boot up a new IW, it tries to delete the trash, but for some reason 
temporarily cannot delete _0.si. Then we go and flush real segment _0; only 
afterwards IFD comes back around and deletes _0.si, which is now a legit file, 
corrupting the index.

It's caused by the filename reuse problem (LUCENE-5903).

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 4.10.0 RC0

2014-08-25 Thread Mark Miller
+1

SUCCESS! [0:55:36.709648]

- Mark

http://about.me/markrmiller

 On Aug 22, 2014, at 8:08 PM, Ryan Ernst r...@iernst.net wrote:
 
 Please vote for the first release candidate for Lucene/Solr 4.10.0.
 
 The artifacts can be downloaded here:
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-rev1619858
 
 Or you can run the smoker tester directly with this command line
 (assuming you have JAVA7_HOME set):
 python3.2 -u dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~rjernst/staging_area/lucene-solr-4.10.0-RC0-rev1619858
 1619858 4.10.0 /tmp/smoke_test_4_10
 
 Please note, the RC number is starting at 0 because I used the sample
 command line in buildAndPushRelease.py.  If there is another release,
 I will jump to RC2 to avoid confusion (thus it would be the second
 RC).  I also plan to open an issue to clean up some things about
  buildAndPushRelease.py help (or lack thereof).
 
 SUCCESS! [0:35:20.208893]
 Here is my +1
 
 Thanks,
 Ryan
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Logging levels in Solr code

2014-08-25 Thread Ramkumar R. Aiyengar
I am in the process of looking at some of the ERROR level log output coming
from Solr to set up alarms, but currently the severity of what ERROR means
is kind of mixed across the code base. I am happy to fix this where I find,
but some guidance on what the various levels mean would be helpful. This is
what I would have expected:


   - ERROR: Shouldn't happen, indicates a bug or misconfiguration. Leads to
   loss of functionality or some operation failing. Any occurrence indicates
   something needs to be fixed.
   - WARN: Recoverable problem, might genuinely happen in rare cases. If it
   happens too often, might indicate an issue or misconfiguration. Bad input
   data could fall into this category, or should it be INFO?
   - INFO: Informational messages which would help in investigation,
   indicates expected behaviour.

Let me know if this is not accurate..
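
For illustration only, here is a tiny self-contained sketch of those conventions using JDK logging (Solr itself logs through SLF4J; the mapping SEVERE~ERROR, WARNING~WARN, FINE~DEBUG and every name below are made up for this example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LevelConventions {
    static final Logger LOG = Logger.getLogger("example.update");

    // ERROR for bugs/misconfiguration, WARN for recoverable bad input,
    // INFO for expected milestones, DEBUG (FINE) for development detail.
    static void handleUpdate(String doc, boolean malformed, Throwable bug) {
        if (bug != null) {
            LOG.log(Level.SEVERE, "Update failed unexpectedly for " + doc, bug); // ERROR
        } else if (malformed) {
            LOG.warning("Rejected malformed document " + doc);                   // WARN
        } else {
            LOG.info("Indexed document " + doc);                                 // INFO
            LOG.fine("Routed " + doc + " through default chain");                // DEBUG
        }
    }

    static int run() {
        final List<LogRecord> records = new ArrayList<>();
        Handler capture = new Handler() {
            @Override public void publish(LogRecord r) { records.add(r); }
            @Override public void flush() { }
            @Override public void close() { }
        };
        LOG.setUseParentHandlers(false);
        LOG.addHandler(capture);
        LOG.setLevel(Level.INFO); // DEBUG-level (FINE) spam is filtered out

        handleUpdate("doc1", false, null);                              // INFO
        handleUpdate("doc2", true, null);                               // WARN
        handleUpdate("doc3", false, new IllegalStateException("bug"));  // ERROR

        LOG.removeHandler(capture);
        return records.size(); // one INFO + one WARNING + one SEVERE = 3
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 3
    }
}
```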


Re: Logging levels in Solr code

2014-08-25 Thread Mark Miller


 On Aug 25, 2014, at 4:44 PM, Ramkumar R. Aiyengar andyetitmo...@gmail.com 
 wrote:
 
 I am in the process of looking at some of the ERROR level log output coming 
 from Solr to set up alarms, but currently the severity of what ERROR means is 
 kind of mixed across the code base. I am happy to fix this where I find, but 
 some guidance on what the various levels mean would be helpful. This is what 
 I would have expected:
 
   • ERROR: Shouldn't happen, indicates a bug or misconfiguration. Leads 
 to loss of functionality or some operation failing. Any occurrence indicates 
 something needs to be fixed.
   • WARN: Recoverable problem, might genuinely happen in rare cases. If 
 it happens too often, might indicate an issue or misconfiguration. Bad input 
 data could fall into this category, or should it be INFO?
   • INFO: Informational messages which would help in investigation, 
 indicates expected behaviour.
 Let me know if this is not accurate..
 

Looks right overall. Which is not to say you won’t find an abuse here or there…

bq. Bad input data could fall into this category,

+1

I’ve been using more DEBUG as well. I think INFO should not spam (like our 
default successful add logging does) - it should just be useful, always-logged 
stuff to help with debugging and monitoring and operations.

DEBUG can be a bit more spammy and just whatever is useful if developing in 
that area.

- Mark

http://about.me/markrmiller
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109755#comment-14109755
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1620451 from [~mikemccand] in branch 'dev/branches/lucene5904'
[ https://svn.apache.org/r1620451 ]

LUCENE-5904: improve debuggability on fail

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Logging levels in Solr code

2014-08-25 Thread Ramkumar R. Aiyengar
Thanks Mark.

I am personally in favour of some record of any request sent to a server
being logged by default to help trace activity when something goes wrong
(which is something successful add logging indirectly achieves), but
unfortunately that currently includes internal distrib=false requests which
adds to the spam.

In any case, I will probably first start with ERROR and WAR...



On Mon, Aug 25, 2014 at 9:48 PM, Mark Miller markrmil...@gmail.com wrote:



  On Aug 25, 2014, at 4:44 PM, Ramkumar R. Aiyengar 
 andyetitmo...@gmail.com wrote:
 
  I am in the process of looking at some of the ERROR level log output
 coming from Solr to set up alarms, but currently the severity of what ERROR
 means is kind of mixed across the code base. I am happy to fix this where I
 find, but some guidance on what the various levels mean would be helpful.
 This is what I would have expected:
 
• ERROR: Shouldn't happen, indicates a bug or misconfiguration.
 Leads to loss of functionality or some operation failing. Any occurrence
 indicates something needs to be fixed.
• WARN: Recoverable problem, might genuinely happen in rare cases.
 If it happens too often, might indicate an issue or misconfiguration. Bad
 input data could fall into this category, or should it be INFO?
• INFO: Informational messages which would help in investigation,
 indicates expected behaviour.
  Let me know if this is not accurate..
 

 Looks right overall. Which is not to say you won’t find an abuse here or
 there…

 bq. Bad input data could fall into this category,

 +1

  I’ve been using more DEBUG as well. I think INFO should not spam (like our
  default successful add logging does) - it should just be useful, always-logged
  stuff to help with debugging and monitoring and operations.

 DEBUG can be a bit more spammy and just whatever is useful if developing
 in that area.

 - Mark

 http://about.me/markrmiller
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Not sent from my iPhone or my Blackberry or anyone else's


[jira] [Updated] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-25 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5879:
---

Attachment: LUCENE-5879.patch

New patch, resolving a few nocommits (but more remain!).  Tests pass.

I created a simple benchmark, generating random longs over the full
range, and random ranges as queries.  For each test, I build the index
once (10M longs), and then run all queries (10K random ranges) 10
times and record best time.  I also print out the number of terms
visited for the first query as a coarse measure of the prefix terms
density.

NF is numeric field and AP is auto-prefix. For AP I just index the
long values as 8 byte binary term, using the same logic in
NumericUtils.longToPrefixCodedBytes to make the binary sort the same
as the numeric sort.
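
The encoding trick mentioned above (indexing longs as 8-byte binary terms whose byte order matches their numeric order, the core of what NumericUtils.longToPrefixCodedBytes does, minus its shift-byte header) can be sketched as follows; the class and method names here are illustrative, not Lucene's API:

```java
public class PrefixCodedLong {
    // Encode a long into 8 big-endian bytes whose unsigned lexicographic
    // order matches the signed numeric order: flip the sign bit so
    // negative values sort before positive ones.
    static byte[] encode(long v) {
        long flipped = v ^ Long.MIN_VALUE; // sign-bit flip
        byte[] out = new byte[8];
        for (int i = 7; i >= 0; i--) {     // big-endian layout
            out[i] = (byte) (flipped & 0xFF);
            flipped >>>= 8;
        }
        return out;
    }

    // Unsigned lexicographic comparison, the way term bytes are compared.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return 0;
    }

    public static void main(String[] args) {
        long[] vals = {Long.MIN_VALUE, -42L, -1L, 0L, 1L, 42L, Long.MAX_VALUE};
        for (int i = 1; i < vals.length; i++) {
            // the byte order must agree with the numeric order
            if (compare(encode(vals[i - 1]), encode(vals[i])) >= 0)
                throw new AssertionError("order broken at " + vals[i]);
        }
        System.out.println("byte order matches numeric order");
    }
}
```

Because the byte order agrees with the numeric order, a numeric range translates directly into a single term range, which is what lets the auto-prefix terms answer it via Terms.intersect.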

Source code for benchmark is here:

  
https://code.google.com/a/apache-extras.org/p/luceneutil/source/browse/src/python/autoPrefixPerf.py

And here:

  
https://code.google.com/a/apache-extras.org/p/luceneutil/source/browse/src/main/perf/AutoPrefixPerf.java

Indexer uses a single thread and SerialMergeScheduler to try to measure
the added indexing cost of terms writing.

Currently the auto-prefix terms are configured like term blocks in
block tree, with a min/max number of items (terms, or child
auto-prefix terms) per auto-prefix term.

Here are the raw results ... net/net it looks like AP can match NF's
performance with a ~50% smaller index and slightly faster indexing
time.  The two approaches are quite different: NF uses TermsEnum.seek,
but AP uses Terms.intersect which for block tree does no seeking.

However, there's a non-trivial indexing time & space cost vs the
baseline... not sure we can/should just always enable this by default
for DOCS_ONLY fields ...

{noformat}
Baseline (8-byte binary terms like AP)
index sec 20.312 sec
index MB 123.4

NF precStep=4
index sec 225.06
index MB 1275.70
term count 560
search msec 6056
NF precStep=8
index sec 115.44
index MB 675.55
term count 2983
search msec 5547
NF precStep=12
index sec 80.77
index MB 470.96
term count 37405
search msec 6080
NF precStep=16 (default)
index sec 61.13
index MB 363.19
term count 125906
search msec 10466
AP min=5 max=8
index sec 194.21
index MB 272.06
term count 315
search msec 5715
AP min=5 max=12
index sec 179.86
index MB 256.88
term count 295
search msec 5771
AP min=5 max=16
index sec 168.91
index MB 254.32
term count 310
search msec 5727
AP min=5 max=20
index sec 157.48
index MB 252.04
term count 321
search msec 5742
AP min=5 max=2147483647
index sec 64.03
index MB 164.55
term count 3955
search msec 6168
AP min=10 max=18
index sec 106.52
index MB 215.26
term count 552
search msec 5792
AP min=10 max=27
index sec 99.00
index MB 212.45
term count 533
search msec 5814
AP min=10 max=36
index sec 88.45
index MB 207.43
term count 505
search msec 5850
AP min=10 max=45
index sec 79.15
index MB 194.73
term count 650
search msec 5681
AP min=10 max=2147483647
index sec 42.68
index MB 162.64
term count 6077
search msec 6199
AP min=15 max=28
index sec 84.83
index MB 204.29
term count 641
search msec 5763
AP min=15 max=42
index sec 74.20
index MB 193.24
term count 753
search msec 5828
AP min=15 max=56
index sec 63.69
index MB 190.06
term count 662
search msec 5839
AP min=15 max=70
index sec 62.53
index MB 185.96
term count 866
search msec 5846
AP min=15 max=2147483647
index sec 40.94
index MB 162.52
term count 6258
search msec 6156
AP min=20 max=38
index sec 69.26
index MB 192.11
term count 839
search msec 5837
AP min=20 max=57
index sec 60.75
index MB 186.18
term count 1034
search msec 5877
AP min=20 max=76
index sec 60.69
index MB 185.21
term count 980
search msec 5866
AP min=20 max=95
index sec 59.87
index MB 184.20
term count 985
search msec 5940
AP min=20 max=2147483647
index sec 41.64
index MB 162.52
term count 6258
search msec 6196
AP min=25 max=48
index sec 61.81
index MB 187.08
term count 806
search msec 5790
AP min=25 max=72
index sec 58.69
index MB 183.02
term count 929
search msec 5894
AP min=25 max=96
index sec 56.80
index MB 178.29
term count 841
search msec 5938
AP min=25 max=120
index sec 55.81
index MB 177.75
term count 1044
search msec 5883
AP min=25 max=2147483647
index sec 40.99
index MB 162.52
term count 6258
search msec 6189
AP min=30 max=58
index sec 56.99
index MB 182.91
term count 1012
search msec 5891
AP min=30 max=87
index sec 55.22
index MB 178.16
term count 1065

[jira] [Comment Edited] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-08-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094318#comment-14094318
 ] 

Noble Paul edited comment on SOLR-5473 at 8/25/14 9:49 PM:
---

bq. I don't mind that as an expert, unsupported override or something, but by 
and large I think this should be a system wide config, similar to legacyMode

+1


was (Author: noble.paul):
bq I don't mind that as an expert, unsupported override or something, but by 
and large I think this should be a system wide config, similar to legacyMode

+1

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: SolrCloud
 Fix For: 5.0, 4.10

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473_no_ui.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under the 
 /collections/collectionname/state.json node and watch state changes 
 selectively.
 https://reviews.apache.org/r/24220/



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5966) Admin UI - menu is fixed, doesn't respect smaller viewports

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109820#comment-14109820
 ] 

ASF subversion and git services commented on SOLR-5966:
---

Commit 1620473 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1620473 ]

SOLR-5966: Admin UI Menu is fixed and doesn't respect smaller viewports

 Admin UI - menu is fixed, doesn't respect smaller viewports
 ---

 Key: SOLR-5966
 URL: https://issues.apache.org/jira/browse/SOLR-5966
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7, 4.8
 Environment: Operating system: windows 7 64-bit, hard disk - 320GB, 
 Memory - 3GB
Reporter: Aman Tandon
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5966.patch


 I am a Windows 7 user and new to Solr. I downloaded the setup for Solr 4.7.1, 
 and when I started the server and opened the admin interface using this URL: 
 http://localhost:8983/solr/#/collection1, I noticed that on selecting 
 collection1 from the cores menu, I was unable to view the full list for 
 collection1.
 Please find this Google Doc link 
 https://drive.google.com/file/d/0B5GzwVkR3aDzNzJheHVmWFRFYzA/edit?usp=sharing 
 containing the screenshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5966) Admin UI - menu is fixed, doesn't respect smaller viewports

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109822#comment-14109822
 ] 

ASF subversion and git services commented on SOLR-5966:
---

Commit 1620474 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1620474 ]

SOLR-5966: Admin UI Menu is fixed and doesn't respect smaller viewports

 Admin UI - menu is fixed, doesn't respect smaller viewports
 ---

 Key: SOLR-5966
 URL: https://issues.apache.org/jira/browse/SOLR-5966
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7, 4.8
 Environment: Operating system: windows 7 64-bit, hard disk - 320GB, 
 Memory - 3GB
Reporter: Aman Tandon
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 5.0, 4.11

 Attachments: SOLR-5966.patch


 I am a Windows 7 user and new to Solr. I downloaded the setup for Solr 4.7.1, 
 and when I started the server and opened the admin interface using this URL: 
 http://localhost:8983/solr/#/collection1, I noticed that on selecting 
 collection1 from the cores menu, I was unable to view the full list for 
 collection1.
 Please find this Google Doc link 
 https://drive.google.com/file/d/0B5GzwVkR3aDzNzJheHVmWFRFYzA/edit?usp=sharing 
 containing the screenshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5966) Admin UI - menu is fixed, doesn't respect smaller viewports

2014-08-25 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109826#comment-14109826
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5966 at 8/25/14 9:58 PM:
--

Thanks Aman and Stefan.


was (Author: shalinmangar):
Thanks Aman and Steffkes.

 Admin UI - menu is fixed, doesn't respect smaller viewports
 ---

 Key: SOLR-5966
 URL: https://issues.apache.org/jira/browse/SOLR-5966
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7, 4.8
 Environment: Operating system: windows 7 64-bit, hard disk - 320GB, 
 Memory - 3GB
Reporter: Aman Tandon
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 5.0, 4.11

 Attachments: SOLR-5966.patch


 I am a Windows 7 user and new to Solr. I downloaded the setup for Solr 4.7.1, 
 and when I started the server and opened the admin interface using this URL: 
 http://localhost:8983/solr/#/collection1, I noticed that on selecting 
 collection1 from the cores menu, I was unable to view the full list for 
 collection1.
 Please find this Google Doc link 
 https://drive.google.com/file/d/0B5GzwVkR3aDzNzJheHVmWFRFYzA/edit?usp=sharing 
 containing the screenshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5966) Admin UI - menu is fixed, doesn't respect smaller viewports

2014-08-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5966.
-

   Resolution: Fixed
Fix Version/s: (was: 4.9)
   4.11

Thanks Aman and Steffkes.

 Admin UI - menu is fixed, doesn't respect smaller viewports
 ---

 Key: SOLR-5966
 URL: https://issues.apache.org/jira/browse/SOLR-5966
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7, 4.8
 Environment: Operating system: windows 7 64-bit, hard disk - 320GB, 
 Memory - 3GB
Reporter: Aman Tandon
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 5.0, 4.11

 Attachments: SOLR-5966.patch


 I am a Windows 7 user and new to Solr. I downloaded the setup for Solr 4.7.1, 
 and when I started the server and opened the admin interface using this URL: 
 http://localhost:8983/solr/#/collection1, I noticed that on selecting 
 collection1 from the cores menu, I was unable to view the full list for 
 collection1.
 Please find this Google Doc link 
 https://drive.google.com/file/d/0B5GzwVkR3aDzNzJheHVmWFRFYzA/edit?usp=sharing 
 containing the screenshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Logging levels in Solr code

2014-08-25 Thread Mark Miller


 On Aug 25, 2014, at 5:21 PM, Ramkumar R. Aiyengar andyetitmo...@gmail.com 
 wrote:
 
 I am personally in favour of some record of any request sent to a server 
 being logged by default to help trace activity 

Certainly you should have the option to turn it on, but I don’t think it makes 
a great default. I don’t think the standard user will find it that useful and 
it will flood logs, making finding other useful information more difficult and 
ballooning retention requirements so that you don’t lose relevant logs. When 
you batch or stream, it also only logs a subset of the adds by default.

- Mark

http://about.me/markrmiller
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109859#comment-14109859
 ] 

Noble Paul commented on SOLR-6365:
--

I'm going with the legacy Solr way of doing this:



{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<params id="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</params>
{code}


The feature is more important than the syntax itself.
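
[Editor's note] For readers unfamiliar with how the three parameter sets interact, a rough sketch of the precedence semantics (request values override defaults, appends accumulate, invariants always win) can be written out. This is an illustrative model only, not Solr's actual implementation, and multi-valued parameters are represented as lists:

```python
def effective_params(request, defaults=None, appends=None, invariants=None):
    """Illustrative model of Solr-style parameter resolution:
    defaults fill gaps, request values override defaults,
    appends accumulate extra values, invariants cannot be overridden."""
    merged = {k: list(v) for k, v in (defaults or {}).items()}
    for k, v in request.items():
        merged[k] = list(v)                 # request overrides defaults
    for k, v in (appends or {}).items():
        merged.setdefault(k, []).extend(v)  # appends accumulate
    for k, v in (invariants or {}).items():
        merged[k] = list(v)                 # invariants always win
    return merged

p = effective_params(
    {"q": ["solr"]},
    defaults={"wt": ["xml"], "df": ["_txt"]},
    appends={"fq": ["type:doc"]},
    invariants={"wt": ["json"]},
)
print(p["wt"])  # → ['json'] -- the invariant beats the default
```

Under this model, a client asking for wt=xml would still get JSON back when wt is declared as an invariant, which is the point of the distinction.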

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 Example:
 {code:xml}
  <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" 
 appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass them in the 
 HTTP request, and eliminate specifying our default components in 
 solrconfig.xml.






[jira] [Created] (SOLR-6432) ant example shouldn't create 'bin' directory inside example/solr/

2014-08-25 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-6432:
--

 Summary: ant example shouldn't create 'bin' directory inside 
example/solr/
 Key: SOLR-6432
 URL: https://issues.apache.org/jira/browse/SOLR-6432
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Anshum Gupta


'ant example' creates an empty directory which might confuse users.






[jira] [Updated] (SOLR-6432) ant example shouldn't create 'bin' directory inside example/solr/

2014-08-25 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6432:
---

Priority: Minor  (was: Major)

 ant example shouldn't create 'bin' directory inside example/solr/
 -

 Key: SOLR-6432
 URL: https://issues.apache.org/jira/browse/SOLR-6432
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Anshum Gupta
Priority: Minor

 'ant example' creates an empty directory which might confuse users.






[jira] [Created] (SOLR-6433) Solr startup scripts should be executable

2014-08-25 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-6433:
--

 Summary: Solr startup scripts should be executable
 Key: SOLR-6433
 URL: https://issues.apache.org/jira/browse/SOLR-6433
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
Assignee: Anshum Gupta


bin/* scripts in solr should be executable in the source.






[jira] [Commented] (SOLR-6433) Solr startup scripts should be executable

2014-08-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109901#comment-14109901
 ] 

ASF subversion and git services commented on SOLR-6433:
---

Commit 1620478 from [~anshumg] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1620478 ]

SOLR-6433: Solr startup scripts should be executable

 Solr startup scripts should be executable
 -

 Key: SOLR-6433
 URL: https://issues.apache.org/jira/browse/SOLR-6433
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
Assignee: Anshum Gupta

 bin/* scripts in solr should be executable in the source.






[jira] [Resolved] (SOLR-6433) Solr startup scripts should be executable

2014-08-25 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6433.


Resolution: Fixed

The trunk had this just fine. Fixed it on 4x.

 Solr startup scripts should be executable
 -

 Key: SOLR-6433
 URL: https://issues.apache.org/jira/browse/SOLR-6433
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
Assignee: Anshum Gupta

 bin/* scripts in solr should be executable in the source.






[jira] [Created] (SOLR-6434) Solr startup script improvements

2014-08-25 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-6434:
--

 Summary: Solr startup script improvements
 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Priority: Critical
 Fix For: 5.0, 4.11


The startup scripts are new and evolving.  This issue is to capture a handful 
of minor improvements.






[jira] [Commented] (SOLR-6434) Solr startup script improvements

2014-08-25 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109952#comment-14109952
 ] 

Erik Hatcher commented on SOLR-6434:


bin/solr line #692: s/remove/remote

 Solr startup script improvements
 

 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Priority: Critical
 Fix For: 5.0, 4.11


 The startup scripts are new and evolving.  This issue is to capture a handful 
 of minor improvements.






[jira] [Commented] (SOLR-6434) Solr startup script improvements

2014-08-25 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109953#comment-14109953
 ] 

Erik Hatcher commented on SOLR-6434:


When running in the background, let's create the console log files in ./logs 
rather than ./bin.

 Solr startup script improvements
 

 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Priority: Critical
 Fix For: 5.0, 4.11


 The startup scripts are new and evolving.  This issue is to capture a handful 
 of minor improvements.





