[jira] [Created] (SOLR-3991) SOLR stuck on initialization with warmup and spellcheck collation on for /select handler

2012-10-25 Thread Alexey Kudinov (JIRA)
Alexey Kudinov created SOLR-3991:


 Summary: SOLR stuck on initialization with warmup and spellcheck 
collation on for /select handler
 Key: SOLR-3991
 URL: https://issues.apache.org/jira/browse/SOLR-3991
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, spellchecker
Affects Versions: 4.0
 Environment: Windows 7/Tomcat 6
Reporter: Alexey Kudinov


The main thread calls ReplicationHandler.getStatistics(), which in turn tries to 
get the searcher and waits. Meanwhile, warmup is triggered and the warmup query 
runs. If a spell checker is defined for the query component and collation is 
enabled, the collation executor also tries to fetch the searcher, creating a 
deadlock.

To reproduce:
1. Define the warmup query
2. Add spell checker configuration to the /select search handler
3. Set spellcheck.collation = true 

Configuration:
zkRun
collection1
2 shards
1 node
4 cores
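The two-thread cycle described above can be sketched generically in plain Java. This is an illustrative lock-ordering deadlock, not Solr's actual code: the lock names are stand-ins, and tryLock timeouts are used so the demo terminates instead of hanging.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    // Stand-ins for the searcher reference and the handler's init/stats lock.
    static final ReentrantLock searcherLock = new ReentrantLock();
    static final ReentrantLock statsLock = new ReentrantLock();
    static final CountDownLatch bothHold = new CountDownLatch(2);
    static final CountDownLatch attemptsDone = new CountDownLatch(2);

    public static void main(String[] args) throws InterruptedException {
        Thread stats = new Thread(() -> attempt("getStatistics", statsLock, searcherLock));
        Thread warmup = new Thread(() -> attempt("warmup/collation", searcherLock, statsLock));
        stats.start();
        warmup.start();
        stats.join();
        warmup.join();
    }

    /** Hold one lock, then try to take the other while the peer does the mirror image. */
    static void attempt(String name, ReentrantLock held, ReentrantLock wanted) {
        held.lock();
        try {
            bothHold.countDown();
            bothHold.await();                 // both locks are now taken
            if (!wanted.tryLock(200, TimeUnit.MILLISECONDS)) {
                System.out.println(name + ": blocked -- this is the deadlock");
            } else {
                wanted.unlock();
            }
            attemptsDone.countDown();
            attemptsDone.await();             // keep holding until both attempts finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            held.unlock();
        }
    }
}
```

With real blocking acquisitions instead of timed tryLock, both threads would wait forever, which matches the stuck initialization reported here.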

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1296 - Still Failing!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1296/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 24581 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:235: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at 
java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
at java.io.BufferedWriter.write(BufferedWriter.java:230)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at java.io.PrintWriter.println(PrintWriter.java:739)
at org.w3c.tidy.Report.printMessage(Report.java:754)
at org.w3c.tidy.Report.errorSummary(Report.java:1572)
at org.w3c.tidy.Tidy.parse(Tidy.java:608)
at org.w3c.tidy.Tidy.parse(Tidy.java:263)
at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)

Total time: 48 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-25 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483953#comment-13483953
 ] 

Simon Willnauer commented on LUCENE-2878:
-

+1 to the renaming. I still think we should document the actual algorithm used 
(i.e. for BrouweianQuery) with references to the paper, though. 
Please go ahead and add this. I will need to bring this branch up to date; will 
do so once you've committed these changes.

 Allow Scorer to expose positions and payloads aka. nuke spans 
 --

 Key: LUCENE-2878
 URL: https://issues.apache.org/jira/browse/LUCENE-2878
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: Positions Branch
Reporter: Simon Willnauer
Assignee: Simon Willnauer
  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
 mentor
 Fix For: Positions Branch

 Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
 LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
 LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
 LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
 LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
 LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
 LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, 
 PosHighlighter.patch


 Currently we have two somewhat separate types of queries: the ones which can 
 make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
 doesn't really do scoring comparable to what other queries do, and at the end 
 of the day they duplicate a lot of code all over Lucene. Span*Queries are 
 also limited to other Span*Query instances, such that you cannot use a 
 TermQuery or a BooleanQuery with SpanNear or anything like that. 
 Besides the Span*Query limitation, other queries lack a quite interesting 
 feature: they cannot score based on term proximity, since scorers don't 
 expose any positional information. All those problems bugged me for a while, 
 so I started working on this using the bulkpostings API. I would have done 
 the first cut on trunk, but TermScorer there works on a BlockReader that does 
 not expose positions, while the one in this branch does. I started adding a new 
 Positions class which users can pull from a scorer; to prevent unnecessary 
 positions enums I added ScorerContext#needsPositions and eventually 
 Scorer#needsPayloads to create the corresponding enum on demand. Yet 
 currently only TermQuery / TermScorer implements this API; the others simply 
 return null instead. 
 To show that the API really works, and that our BulkPostings work fine with 
 positions too, I cut over TermSpanQuery to use a TermScorer under the hood and 
 nuked TermSpans entirely. A nice side effect of this was that the Position 
 BulkReading implementation got some exercise, and it now :) works entirely with 
 positions, while payloads for bulk reading are somewhat experimental in the 
 patch and only work with the Standard codec. 
 So all spans now work on top of TermScorer (I truly hate spans since today), 
 including the ones that need payloads (StandardCodec ONLY)!!  I didn't bother 
 to implement the other codecs yet, since I want feedback on the API and 
 on this first cut before I go on with it. I will upload the corresponding 
 patch in a minute. 
 I also had to cut over SpanQuery.getSpans(IR) to 
 SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk 
 first, but after the pain today I need a break first :).
 The patch passes all core tests 
 (org.apache.lucene.search.highlight.HighlighterTest still fails, but I haven't 
 looked into the MemoryIndex BulkPostings API yet).




[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1294 - Failure!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1294/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 24415 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build.xml:235: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at 
java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
at java.io.BufferedWriter.write(BufferedWriter.java:230)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at java.io.PrintWriter.println(PrintWriter.java:739)
at org.w3c.tidy.Report.printMessage(Report.java:754)
at org.w3c.tidy.Report.attrError(Report.java:1171)
at org.w3c.tidy.AttrCheckImpl$CheckName.check(AttrCheckImpl.java:843)
at org.w3c.tidy.AttVal.checkAttribute(AttVal.java:265)
at org.w3c.tidy.Node.checkAttributes(Node.java:343)
at org.w3c.tidy.TagCheckImpl$CheckAnchor.check(TagCheckImpl.java:489)
at org.w3c.tidy.Lexer.getToken(Lexer.java:2431)
at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2051)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBody.parse(ParserImpl.java:971)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseHTML.parse(ParserImpl.java:483)
at org.w3c.tidy.ParserImpl.parseDocument(ParserImpl.java:3401)
at org.w3c.tidy.Tidy.parse(Tidy.java:433)
at org.w3c.tidy.Tidy.parse(Tidy.java:263)
at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)

Total time: 46 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-25 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483976#comment-13483976
 ] 

Alan Woodward commented on LUCENE-2878:
---

OK, done, added some more javadocs as well.  Next cleanup is to make the 
distinction between iterators and filters a bit more explicit, I think.  We've 
got some iterators that also act as filters, and some which are distinct.  I 
think they should all be separate classes - filters are a public API that 
clients can use to create queries, whereas Iterators are an implementation 
detail.

 Allow Scorer to expose positions and payloads aka. nuke spans 
 --

 Key: LUCENE-2878
 URL: https://issues.apache.org/jira/browse/LUCENE-2878
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: Positions Branch
Reporter: Simon Willnauer
Assignee: Simon Willnauer
  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
 mentor
 Fix For: Positions Branch





[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-25 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483981#comment-13483981
 ] 

Alan Woodward commented on SOLR-1972:
-

Thanks for testing it, Shawn.

I think I'd like some more eyes on it before I commit - it adds a dependency to 
solr-core, which is a pretty big change.  Anyone else have an opinion?

 Need additional query stats in admin interface - median, 95th and 99th 
 percentile
 -

 Key: SOLR-1972
 URL: https://issues.apache.org/jira/browse/SOLR-1972
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 1.4
Reporter: Shawn Heisey
Assignee: Alan Woodward
Priority: Minor
 Fix For: 4.1

 Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
 elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
 SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
 SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
 SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
 SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
 SOLR-1972-url_pattern.patch


 I would like to see more detailed query statistics from the admin GUI.  This 
 is what you can get now:
 requests : 809
 errors : 0
 timeouts : 0
 totalTime : 70053
 avgTimePerRequest : 86.59209
 avgRequestsPerSecond : 0.8148785 
 I'd like to see more data on the time per request - median, 95th percentile, 
 99th percentile, and any other statistical function that makes sense to 
 include.  In my environment, the first bunch of queries after startup tend to 
 take several seconds each.  I find that the average value tends to be useless 
 until it has several thousand queries under its belt and the caches are 
 thoroughly warmed.  The statistical functions I have mentioned would quickly 
 eliminate the influence of those initial slow queries.
 The system will have to store individual data about each query.  I don't know 
 if this is something Solr does already.  It would be nice to have a 
 configurable count of how many of the most recent data points are kept, to 
 control the amount of memory the feature uses.  The default value could be 
 something like 1024 or 4096.
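The bounded window described in the last paragraph can be sketched as a ring buffer over the most recent samples. LatencyWindow and its method names are invented for illustration; this is not Solr's API, nor that of any particular metrics library.

```java
import java.util.Arrays;

/** Keeps the most recent N latency samples and computes percentiles over them. */
public class LatencyWindow {
    private final long[] samples;
    private int next = 0;        // next slot to overwrite
    private long count = 0;      // total samples ever recorded

    public LatencyWindow(int capacity) {   // e.g. 1024 or 4096 as suggested above
        this.samples = new long[capacity];
    }

    public synchronized void record(long millis) {
        samples[next] = millis;
        next = (next + 1) % samples.length;   // oldest sample is overwritten
        count++;
    }

    /** p in (0, 100], e.g. 50 for median, 95, 99. Nearest-rank on a sorted copy. */
    public synchronized long percentile(double p) {
        int n = (int) Math.min(count, samples.length);
        if (n == 0) throw new IllegalStateException("no samples recorded yet");
        long[] sorted = Arrays.copyOf(samples, n);
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * n);
        return sorted[Math.max(0, rank - 1)];
    }
}
```

Memory use is fixed by the capacity, so the initial slow queries age out of the window as traffic arrives, which is exactly the behavior the issue asks for.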




[jira] [Commented] (SOLR-3989) RuntimeException thrown by SolrZkClient should wrap cause, have a message, or be SolrException

2012-10-25 Thread Audun Wilhelmsen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484051#comment-13484051
 ] 

Audun Wilhelmsen commented on SOLR-3989:


I see that the constructor doesn't wrap the cause; the fix is to wrap it like 
this: throw new RuntimeException(e);

The trunk code currently looks like this: 

public SolrZkClient(String zkServerAddress, int zkClientTimeout,
    ZkClientConnectionStrategy strat, final OnReconnect onReconnect,
    int clientConnectTimeout) {
  ...
  try {
    ...
  } catch (Throwable e) {
    ...
    throw new RuntimeException();  // the caught Throwable 'e' is dropped here
  }
  ...
}
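As a minimal illustration of why wrapping matters, the sketch below (invented names, not Solr's actual code) shows that the original failure survives once the Throwable is passed to the RuntimeException constructor:

```java
import java.io.IOException;

/** Sketch of the fix suggested above: wrap the caught Throwable so callers
 *  (and logs) can still see the real failure. Names are illustrative only. */
public class WrapCauseSketch {
    static void connect() {
        try {
            // Stand-in for the real failure inside the constructor.
            throw new IOException("connection refused");
        } catch (Throwable e) {
            // Was: throw new RuntimeException();  -- cause and message lost
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        try {
            connect();
        } catch (RuntimeException e) {
            // The original Throwable is preserved for diagnosis.
            System.out.println("cause: " + e.getCause());
        }
    }
}
```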


 RuntimeException thrown by SolrZkClient should wrap cause, have a message, or 
 be SolrException
 --

 Key: SOLR-3989
 URL: https://issues.apache.org/jira/browse/SOLR-3989
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Colin Bartolome

 In a few spots, but notably in the constructor for SolrZkClient, a try-catch 
 block will catch Throwable and throw a new RuntimeException with no cause or 
 message. Either the RuntimeException should wrap the Throwable that was 
 caught, some sort of message should be added, or the type of the exception 
 should be changed to SolrException so calling code can catch these exceptions 
 without casting too broad a net.
 Reproduce this by creating a CloudSolrServer that points to a URL that is 
 valid, but has no server running:
 CloudSolrServer server = new CloudSolrServer("localhost:9983");
 server.connect();




[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1296 - Failure!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1296/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 24421 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build.xml:235: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at 
java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
at java.io.BufferedWriter.write(BufferedWriter.java:230)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at java.io.PrintWriter.println(PrintWriter.java:739)
at org.w3c.tidy.Report.printMessage(Report.java:754)
at org.w3c.tidy.Report.attrError(Report.java:1171)
at org.w3c.tidy.AttrCheckImpl$CheckName.check(AttrCheckImpl.java:843)
at org.w3c.tidy.AttVal.checkAttribute(AttVal.java:265)
at org.w3c.tidy.Node.checkAttributes(Node.java:343)
at org.w3c.tidy.TagCheckImpl$CheckAnchor.check(TagCheckImpl.java:489)
at org.w3c.tidy.Lexer.getToken(Lexer.java:2431)
at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2051)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBody.parse(ParserImpl.java:971)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseHTML.parse(ParserImpl.java:483)
at org.w3c.tidy.ParserImpl.parseDocument(ParserImpl.java:3401)
at org.w3c.tidy.Tidy.parse(Tidy.java:433)
at org.w3c.tidy.Tidy.parse(Tidy.java:263)
at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)

Total time: 51 minutes 42 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Created] (SOLR-3992) QuerySenderListener doesn't populate document cache

2012-10-25 Thread Shotaro Kamio (JIRA)
Shotaro Kamio created SOLR-3992:
---

 Summary: QuerySenderListener doesn't populate document cache
 Key: SOLR-3992
 URL: https://issues.apache.org/jira/browse/SOLR-3992
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0
Reporter: Shotaro Kamio


The QuerySenderListener class can be used to populate caches on startup of Solr 
(the firstSearcher event). The code looks like it also tries to populate the 
document cache, but it doesn't.

{code}
NamedList values = rsp.getValues();
for (int i=0; i<values.size(); i++) {
  Object o = values.getVal(i);
  if (o instanceof DocList) {
{code}

This is because the response stores the document list in a ResultContext 
object, not directly as a DocList.
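A sketch of the fix might unwrap the ResultContext before the instanceof check. The Solr types are modeled as minimal stand-ins below so the example is self-contained; the real fix would use the actual ResultContext and DocList classes from solr-core.

```java
import java.util.ArrayList;
import java.util.List;

/** Self-contained sketch of the fix; the Solr types are stand-ins. */
public class WarmDocCacheSketch {
    // --- illustrative stand-ins for the Solr classes involved ---
    interface DocList { }
    static class ResultContext {
        DocList docs;
        ResultContext(DocList docs) { this.docs = docs; }
    }

    /** Walk the response values, unwrapping ResultContext to reach the DocList. */
    static List<DocList> collectDocLists(List<Object> values) {
        List<DocList> found = new ArrayList<>();
        for (Object o : values) {
            if (o instanceof ResultContext) {   // the case the original loop misses
                o = ((ResultContext) o).docs;
            }
            if (o instanceof DocList) {
                found.add((DocList) o);          // iterating this warms the document cache
            }
        }
        return found;
    }

    public static void main(String[] args) {
        DocList dl = new DocList() { };
        List<Object> values = new ArrayList<>();
        values.add(new ResultContext(dl));       // what rsp.getValues() actually holds
        values.add("responseHeader");            // unrelated entry, skipped
        System.out.println("doc lists found: " + collectDocLists(values).size());
        // prints: doc lists found: 1
    }
}
```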





[jira] [Updated] (SOLR-3992) QuerySenderListener doesn't populate document cache

2012-10-25 Thread Shotaro Kamio (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shotaro Kamio updated SOLR-3992:


Description: 
The QuerySenderListener class can be used to populate caches on startup of Solr 
(the firstSearcher event). The code looks like it tries to populate the 
document cache as well, but it doesn't.

{code}
NamedList values = rsp.getValues();
for (int i=0; i<values.size(); i++) {
  Object o = values.getVal(i);
  if (o instanceof DocList) {
{code}

This is because the response stores the document list in a ResultContext 
object, not directly as a DocList.


  was:
QuerySenderListner class can be used to populate cache on startup of solr 
(firstSearcher event). It populates caches. The code looks trying to populate 
document cache also. But it doesn't.

{code}
NamedList values = rsp.getValues();
for (int i=0; i<values.size(); i++) {
  Object o = values.getVal(i);
  if (o instanceof DocList) {
{code}

It is because value of response object uses ResultContext object to store 
document list, not DocList object.



 QuerySenderListener doesn't populate document cache
 ---

 Key: SOLR-3992
 URL: https://issues.apache.org/jira/browse/SOLR-3992
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0
Reporter: Shotaro Kamio

 The QuerySenderListener class can be used to populate caches on startup of Solr 
 (the firstSearcher event). The code looks like it tries to populate the 
 document cache as well, but it doesn't.
 {code}
 NamedList values = rsp.getValues();
 for (int i=0; i<values.size(); i++) {
   Object o = values.getVal(i);
   if (o instanceof DocList) {
 {code}
 This is because the response stores the document list in a ResultContext 
 object, not directly as a DocList.




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-25 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484107#comment-13484107
 ] 

Simon Willnauer commented on LUCENE-2878:
-

Alan, I merged up with trunk and fixed some small bugs. +1 to all the cleanups

 Allow Scorer to expose positions and payloads aka. nuke spans 
 --

 Key: LUCENE-2878
 URL: https://issues.apache.org/jira/browse/LUCENE-2878
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: Positions Branch
Reporter: Simon Willnauer
Assignee: Simon Willnauer
  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
 mentor
 Fix For: Positions Branch





[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1297/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 24417 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build.xml:235: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at 
java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
at java.io.BufferedWriter.write(BufferedWriter.java:230)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at java.io.PrintWriter.println(PrintWriter.java:739)
at org.w3c.tidy.Report.printMessage(Report.java:754)
at org.w3c.tidy.Report.attrError(Report.java:1171)
at org.w3c.tidy.AttrCheckImpl$CheckName.check(AttrCheckImpl.java:843)
at org.w3c.tidy.AttVal.checkAttribute(AttVal.java:265)
at org.w3c.tidy.Node.checkAttributes(Node.java:343)
at org.w3c.tidy.TagCheckImpl$CheckAnchor.check(TagCheckImpl.java:489)
at org.w3c.tidy.Lexer.getToken(Lexer.java:2431)
at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2051)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBody.parse(ParserImpl.java:971)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseHTML.parse(ParserImpl.java:483)
at org.w3c.tidy.ParserImpl.parseDocument(ParserImpl.java:3401)
at org.w3c.tidy.Tidy.parse(Tidy.java:433)
at org.w3c.tidy.Tidy.parse(Tidy.java:263)
at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)

Total time: 51 minutes 11 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
Email was triggered for: Failure
Sending email for trigger: Failure




RE: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Uwe Schindler
This also OOMs locally here!

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@sd-datasolutions.de]
 Sent: Thursday, October 25, 2012 3:40 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!
 
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
 Windows/1297/
 Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
 
 All tests passed
 
 Build Log:
 [...truncated 24417 lines...]
 BUILD FAILED
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60:
 The following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\lucene\build.xml:235: The following error occurred while executing
 this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError:
 Java heap space
 
 Total time: 51 minutes 11 seconds
 Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording
 test results Description set: Java: 32bit/jdk1.7.0_07 -server -
 XX:+UseParallelGC Email was triggered for: Failure Sending email for trigger:
 Failure
 






[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 809 - Failure

2012-10-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/809/

1 tests failed.
REGRESSION:  
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([9829D364010F5BBC:22FBBC1C8221B5A9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin(TestContentStreamDataSource.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 

Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Robert Muir
For now, use a 64-bit JVM. I'll work on this tonight (may have to make a
custom task, the JTidy one is crap).

On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de wrote:
 This also OOMs locally here!

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de

 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@sd-datasolutions.de]
 Sent: Thursday, October 25, 2012 3:40 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!

 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
 Windows/1297/
 Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC

 All tests passed

 Build Log:
 [...truncated 24417 lines...]
 BUILD FAILED
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60:
 The following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\lucene\build.xml:235: The following error occurred while executing
 this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError:
 Java heap space

 Total time: 51 minutes 11 seconds
 Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording
 test results Description set: Java: 32bit/jdk1.7.0_07 -server -
 XX:+UseParallelGC Email was triggered for: Failure Sending email for trigger:
 Failure







[jira] [Created] (SOLR-3993) SolrCloud leader election on single node stucks the initialization

2012-10-25 Thread Alexey Kudinov (JIRA)
Alexey Kudinov created SOLR-3993:


 Summary: SolrCloud leader election on single node stucks the 
initialization
 Key: SOLR-3993
 URL: https://issues.apache.org/jira/browse/SOLR-3993
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: Windows 7, Tomcat 6
Reporter: Alexey Kudinov


 setup:
1 node, 4 cores, 2 shards.
15 documents indexed.

problem:
init stage times out.

probable cause:
According to the init flow, cores are initialized one by one, synchronously.
Actually, the main thread waits in
ShardLeaderElectionContext.waitForReplicasToComeUp until the retry threshold
is reached while the replica cores are not yet initialized; in other words,
there is no chance the other replicas can come up in the meantime.
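A minimal, self-contained sketch of this failure mode (not Solr code; WaitForReplicasDemo and its members are hypothetical names): the thread that should register the replica cores is itself stuck polling for them, so the wait can only ever time out.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (hypothetical names, not Solr code) of the failure mode
// described above: the main thread polls for replicas during core
// registration, but the replica cores would only be registered by that
// same thread afterwards, so the count can never rise before the timeout.
public class WaitForReplicasDemo {
    static final AtomicInteger replicasUp = new AtomicInteger(0);

    // Stand-in for ShardLeaderElectionContext.waitForReplicasToComeUp.
    static boolean waitForReplicas(int expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (replicasUp.get() >= expected) {
                return true; // enough replicas registered in time
            }
            Thread.sleep(10); // same sleep/poll pattern as in the stack trace
        }
        return false; // gives up, as in "they are taking too long"
    }

    public static void main(String[] args) throws InterruptedException {
        // Cores register one by one on this same thread, so the replica
        // core can only register after the wait has already timed out.
        boolean seen = waitForReplicas(2, 200);
        replicasUp.incrementAndGet(); // replica registers too late
        System.out.println("replicas seen in time: " + seen); // false
    }
}
```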
stack trace:
Thread [main] (Suspended)
owns: HashMap<K,V>  (id=3876)
owns: StandardContext  (id=3877)
owns: HashMap<K,V>  (id=3878)
owns: StandardHost  (id=3879)
owns: StandardEngine  (id=3880)
owns: Service[]  (id=3881)
Thread.sleep(long) line: not available [native method]
ShardLeaderElectionContext.waitForReplicasToComeUp(boolean, String) 
line: 298
ShardLeaderElectionContext.runLeaderProcess(boolean) line: 143
LeaderElector.runIamLeaderProcess(ElectionContext, boolean) line: 152
LeaderElector.checkIfIamLeader(int, ElectionContext, boolean) line: 96
LeaderElector.joinElection(ElectionContext) line: 262
ZkController.joinElection(CoreDescriptor, boolean) line: 733
ZkController.register(String, CoreDescriptor, boolean, boolean) line: 
566
ZkController.register(String, CoreDescriptor) line: 532
CoreContainer.registerInZk(SolrCore) line: 709
CoreContainer.register(String, SolrCore, boolean) line: 693
CoreContainer.load(String, InputSource) line: 535
CoreContainer.load(String, File) line: 356
CoreContainer$Initializer.initialize() line: 308
SolrDispatchFilter.init(FilterConfig) line: 107
ApplicationFilterConfig.getFilter() line: 295
ApplicationFilterConfig.setFilterDef(FilterDef) line: 422
ApplicationFilterConfig.init(Context, FilterDef) line: 115
StandardContext.filterStart() line: 4072
StandardContext.start() line: 4726
StandardHost(ContainerBase).addChildInternal(Container) line: 799
StandardHost(ContainerBase).addChild(Container) line: 779
StandardHost.addChild(Container) line: 601
HostConfig.deployDescriptor(String, File, String) line: 675
HostConfig.deployDescriptors(File, String[]) line: 601
HostConfig.deployApps() line: 502
HostConfig.start() line: 1317
HostConfig.lifecycleEvent(LifecycleEvent) line: 324
LifecycleSupport.fireLifecycleEvent(String, Object) line: 142
StandardHost(ContainerBase).start() line: 1065
StandardHost.start() line: 840
StandardEngine(ContainerBase).start() line: 1057
StandardEngine.start() line: 463
StandardService.start() line: 525
StandardServer.start() line: 754
Catalina.start() line: 595
NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not 
available [native method]
NativeMethodAccessorImpl.invoke(Object, Object[]) line: not available
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: not 
available
Method.invoke(Object, Object...) line: not available
Bootstrap.start() line: 289
Bootstrap.main(String[]) line: 414

   
After a while, the session times out and following exception appears:
Oct 25, 2012 1:16:56 PM org.apache.solr.cloud.ShardLeaderElectionContext 
waitForReplicasToComeUp
INFO: Waiting until we see more replicas up: total=2 found=0 timeoutin=-95
Oct 25, 2012 1:16:56 PM org.apache.solr.cloud.ShardLeaderElectionContext 
waitForReplicasToComeUp
INFO: Was waiting for replicas to come up, but they are taking too long - 
assuming they won't come back till later
Oct 25, 2012 1:16:56 PM org.apache.solr.common.SolrException log
SEVERE: Errir checking for the number of election 
participants:org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for 
/collections/collection1/leader_elect/shard2/election
at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1249)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:227)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:224)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:63)
at 

RE: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Uwe Schindler
I committed a 64bit-only workaround for now...

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 5:03 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!
 
 For now, use a 64-bit JVM. I'll work on this tonight (may have to make a custom
 task, the JTidy one is crap).
 
 On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de wrote:
  This also OOMs locally here!
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  -Original Message-
  From: Policeman Jenkins Server [mailto:jenk...@sd-datasolutions.de]
  Sent: Thursday, October 25, 2012 3:40 PM
  To: dev@lucene.apache.org
  Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
  Windows/1297/
  Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
 
  All tests passed
 
  Build Log:
  [...truncated 24417 lines...]
  BUILD FAILED
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\build.xml:60:
  The following error occurred while executing this line:
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\lucene\build.xml:235: The following error occurred while
  executing this line:
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError:
  Java heap space
 
  Total time: 51 minutes 11 seconds
  Build step 'Invoke Ant' marked build as failure Archiving artifacts
  Recording test results Description set: Java: 32bit/jdk1.7.0_07
  -server - XX:+UseParallelGC Email was triggered for: Failure Sending email
 for trigger:
  Failure
 
 
 
 



Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Robert Muir
Thanks. I tried a blind stab (trunk-only).

The issue is that the built-in task makes a ByteArrayOutputStream or
whatever for all the output, so I think this grows large across all the
files.

Unfortunately, if we use quiet=true, the task no longer fails on
error?! But I told it not to emit any warnings, so it writes about half
as much.

still this task is really annoying, we probably just need a custom one.
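A rough, self-contained illustration of that buffering pattern (BufferedReportDemo is a hypothetical name, not the JTidy task itself): every report line lands in a single in-memory ByteArrayOutputStream, so heap use grows with the total output and the backing array repeatedly doubles via Arrays.copyOf, the frames at the top of the OOM trace.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintWriter;

// Hypothetical demo (not the JTidy task) of unbounded in-memory buffering:
// all report output is retained in one ByteArrayOutputStream until the
// task finishes, so memory grows with the number and verbosity of files.
public class BufferedReportDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        PrintWriter out = new PrintWriter(buffer);
        for (int i = 0; i < 100_000; i++) {
            out.println("Warning: some tidy message for file " + i);
        }
        out.flush();
        // Everything stays resident until the end, which is why a quieter
        // task (fewer warnings) writes about half as much.
        System.out.println("buffered bytes: " + buffer.size());
    }
}
```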

On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de wrote:
 I committed a 64bit-only workaround for now...

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 5:03 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!

 for now, use a 64-bit jvm. ill work on tis tonight (May have to make a custom
 task, the jtidy one is crap)

 On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de wrote:
  This also OOMs locally here!
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  -Original Message-
  From: Policeman Jenkins Server [mailto:jenk...@sd-datasolutions.de]
  Sent: Thursday, October 25, 2012 3:40 PM
  To: dev@lucene.apache.org
  Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
  Windows/1297/
  Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
 
  All tests passed
 
  Build Log:
  [...truncated 24417 lines...]
  BUILD FAILED
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
 Windows\build.xml:60:
  The following error occurred while executing this line:
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\lucene\build.xml:235: The following error occurred while
  executing this line:
  C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError:
  Java heap space
 
  Total time: 51 minutes 11 seconds
  Build step 'Invoke Ant' marked build as failure Archiving artifacts
  Recording test results Description set: Java: 32bit/jdk1.7.0_07
  -server - XX:+UseParallelGC Email was triggered for: Failure Sending email
 for trigger:
  Failure
 
 
 
 


 

RE: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Uwe Schindler
Can we use Groovy to instantiate the Tidy classes and pass filesets? We don't
need a task if all that's missing is some new Tidy(...).check(...) call.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 6:40 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!
 
 Thanks. I tried a blind stab (trunk-only).
 
 The issue is the built in task makes a ByteArrayOutputStream or whatever for
 all the output. So I think this grows large on all the files.
 
 unfortunately if we use quiet=true, the task no longer fails on error?! But i
 told it not to emit any warnings, so it writes about half as much.
 
 still this task is really annoying, we probably just need a custom one.
 
 On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de wrote:
  I committed a 64bit-only workaround for now...
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Thursday, October 25, 2012 5:03 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  For now, use a 64-bit JVM. I'll work on this tonight (may have to make
  a custom task, the JTidy one is crap).
 
  On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de
 wrote:
   This also OOMs locally here!
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
   -Original Message-
   From: Policeman Jenkins Server
   [mailto:jenk...@sd-datasolutions.de]
   Sent: Thursday, October 25, 2012 3:40 PM
   To: dev@lucene.apache.org
   Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
   Build #
   1297 - Still Failing!
  
   Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
   Windows/1297/
   Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
  
   All tests passed
  
   Build Log:
   [...truncated 24417 lines...]
   BUILD FAILED
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\build.xml:60:
   The following error occurred while executing this line:
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\lucene\build.xml:235: The following error occurred while
   executing this line:
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError:
   Java heap space
  
   Total time: 51 minutes 11 seconds
   Build step 'Invoke Ant' marked build as failure Archiving
   artifacts Recording test 

[jira] [Commented] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues

2012-10-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484272#comment-13484272
 ] 

Michael McCandless commented on LUCENE-4504:


That's definitely a bug, and the fix looks good (though are we sure the first 
if shouldn't return 1?).

Do you have a test showing the bug/fix?  Thanks.

 Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
 -

 Key: LUCENE-4504
 URL: https://issues.apache.org/jira/browse/LUCENE-4504
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0
Reporter: TomShally
Priority: Minor
 Attachments: LUCENE-4504.patch


 IS.searchAfter() always returns an empty result when using FunctionValues for 
 sorting.
 The culprit is ValueSourceComparator.compareDocToValue() returning -1 when it 
 should return +1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Robert Muir
I don't know Groovy, but this sounds great.

I only used the built-in task to find the javadocs bugs quickly and
because it was much easier, but I hate it.

The worst thing about the built-in task is that it puts all its output in
this ByteArrayStream instead of a file, so if you actually have broken
docs, it just tells you jtidy failed but hides all the useful
output.


On Thu, Oct 25, 2012 at 12:50 PM, Uwe Schindler u...@thetaphi.de wrote:
 Can we use groovy to instantiate Tidy classes and pass filesets? We don't 
 need a task if it is only missing some new Tidy(...).check(...)

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 6:40 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!

Thanks. I tried a blind stab (trunk-only).

The issue is the built-in task makes a ByteArrayOutputStream or whatever for
all the output. So I think this grows large on all the files.

Unfortunately, if we use quiet=true, the task no longer fails on error?! But I
told it not to emit any warnings, so it writes about half as much.

Still, this task is really annoying; we probably just need a custom one.

 On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de wrote:
  I committed a 64bit-only workaround for now...
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Thursday, October 25, 2012 5:03 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
for now, use a 64-bit JVM. I'll work on this tonight (may have to make
a custom task; the jtidy one is crap)
 
  On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de
 wrote:
   This also OOMs locally here!
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
   -Original Message-
   From: Policeman Jenkins Server
   [mailto:jenk...@sd-datasolutions.de]
   Sent: Thursday, October 25, 2012 3:40 PM
   To: dev@lucene.apache.org
   Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
   Build #
   1297 - Still Failing!
  
   Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
   Windows/1297/
   Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
  
   All tests passed
  
   Build Log:
   [...truncated 24417 lines...]
   BUILD FAILED
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
  Windows\build.xml:60:
   The following error occurred while executing this line:
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\lucene\build.xml:235: The following error occurred while
   executing this line:
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\lucene\common-build.xml:1577:
  java.lang.OutOfMemoryError: Java heap space
 

RE: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Uwe Schindler
OK, can you open an issue and give me some hints about the (Java) APIs to use?
I can code it :-)

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 6:56 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!
 
 I dont know groovy, but this sounds great.
 
 I only used the built-in task to find the javadocs bugs quickly and because it
 was much easier, but i hate it
 
 The worst thing about the built-in task is it puts all its output in this
 ByteArrayStream instead of a file, so if you actually have broken docs, it 
 just
 tells you jtidy failed but hides all the useful output.
 
 
 On Thu, Oct 25, 2012 at 12:50 PM, Uwe Schindler u...@thetaphi.de wrote:
  Can we use groovy to instantiate Tidy classes and pass filesets? We don't
 need a task if it is only missing some new Tidy(...).check(...)
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Thursday, October 25, 2012 6:40 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  Thanks. I tried a blind stab (trunk-only).
 
  The issue is the built in task makes a ByteArrayOutputStream or
  whatever for all the output. So I think this grows large on all the files.
 
  unfortunately if we use quiet=true, the task no longer fails on
  error?! But i told it not to emit any warnings, so it writes about half as
 much.
 
  still this task is really annoying, we probably just need a custom one.
 
  On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de
 wrote:
   I committed a 64bit-only workaround for now...
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
  
   -Original Message-
   From: Robert Muir [mailto:rcm...@gmail.com]
   Sent: Thursday, October 25, 2012 5:03 PM
   To: dev@lucene.apache.org
   Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
   Build #
   1297 - Still Failing!
  
   for now, use a 64-bit jvm. ill work on tis tonight (May have to make
   a custom task, the jtidy one is crap)
  
   On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de
  wrote:
This also OOMs locally here!
   
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
eMail: u...@thetaphi.de
   
-Original Message-
From: Policeman Jenkins Server
[mailto:jenk...@sd-datasolutions.de]
Sent: Thursday, October 25, 2012 3:40 PM
To: dev@lucene.apache.org
Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
Build #
1297 - Still Failing!
   
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
Windows/1297/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
   
All tests passed
   
Build Log:
[...truncated 24417 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\build.xml:60:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\build.xml:235: The following error occurred while
executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\common-build.xml:1577:
  java.lang.OutOfMemoryError: Java heap space

[jira] [Created] (LUCENE-4505) improve jtidy javadocs check

2012-10-25 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-4505:
---

 Summary: improve jtidy javadocs check
 Key: LUCENE-4505
 URL: https://issues.apache.org/jira/browse/LUCENE-4505
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


Currently we are using the ant task 
(http://sourceforge.net/p/jtidy/code/1261/tree/trunk/jtidy/src/main/java/org/w3c/tidy/ant/JTidyTask.java)
 built into jtidy itself.

This has a number of disadvantages:
* at least in the version we are using, it creates a ByteArrayDataOutput that 
hides all the output, so if there is an error it's no good.
* requires creation of a temp directory: even though we disable the actual 
output with a parameter, this means it creates thousands of 0-byte files

We only pass 3 options to tidy today:
* input-encoding=UTF-8
* only-errors=true
* show-warnings=false -- this one is an OOM hack.

Ideally i think we would:
* pass input-encoding=UTF-8, only-errors=true, quiet=true.
* send all output to a single file or property.
* if this contains any contents, fail and print the contents.

This would mean we would fail on warnings too (I checked; this is a good thing, 
there would be some things to fix).
So as a start we could just set show-warnings=false temporarily so we only fail 
on errors, like today.
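The fail-on-any-output check proposed above can be sketched roughly as follows. This is an assumption-laden illustration, not the eventual build code: `TidyCheck` and `passes` are hypothetical names, and the `runTidy` callback stands in for the real `org.w3c.tidy.Tidy` invocation with its error output redirected to the collected writer.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.function.Consumer;

public class TidyCheck {
    // Collect everything tidy would print into one in-memory buffer;
    // the check passes only if tidy emitted nothing at all.
    static boolean passes(Consumer<PrintWriter> runTidy) {
        StringWriter buf = new StringWriter();
        PrintWriter out = new PrintWriter(buf, true);
        runTidy.accept(out);   // real code: tidy.setErrout(out); tidy.parse(...)
        out.flush();
        return buf.toString().isEmpty();
    }

    public static void main(String[] args) {
        // clean file: tidy writes nothing, so the check passes
        System.out.println(passes(w -> {}));  // true
        // broken file: tidy writes a diagnostic line, so the check fails
        System.out.println(passes(
            w -> w.println("line 24 column 62 - Warning: missing </b> before </p>")));  // false
    }
}
```

In the real build the buffer contents would be printed before failing, so the diagnostics are not hidden the way the current ant task hides them.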





Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Robert Muir
https://issues.apache.org/jira/browse/LUCENE-4505

On Thu, Oct 25, 2012 at 1:48 PM, Uwe Schindler u...@thetaphi.de wrote:
 Ok, can you open issue and give me some hints about the (Java-)APIs to use, I 
 can code it :-)

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 6:56 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!

 I dont know groovy, but this sounds great.

 I only used the built-in task to find the javadocs bugs quickly and because 
 it
 was much easier, but i hate it

 The worst thing about the built-in task is it puts all its output in this
 ByteArrayStream instead of a file, so if you actually have broken docs, it 
 just
 tells you jtidy failed but hides all the useful output.


 On Thu, Oct 25, 2012 at 12:50 PM, Uwe Schindler u...@thetaphi.de wrote:
  Can we use groovy to instantiate Tidy classes and pass filesets? We don't
 need a task if it is only missing some new Tidy(...).check(...)
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Thursday, October 25, 2012 6:40 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  Thanks. I tried a blind stab (trunk-only).
 
  The issue is the built in task makes a ByteArrayOutputStream or
  whatever for all the output. So I think this grows large on all the files.
 
  unfortunately if we use quiet=true, the task no longer fails on
  error?! But i told it not to emit any warnings, so it writes about half as
 much.
 
  still this task is really annoying, we probably just need a custom one.
 
  On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de
 wrote:
   I committed a 64bit-only workaround for now...
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
  
   -Original Message-
   From: Robert Muir [mailto:rcm...@gmail.com]
   Sent: Thursday, October 25, 2012 5:03 PM
   To: dev@lucene.apache.org
   Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
   Build #
   1297 - Still Failing!
  
   for now, use a 64-bit jvm. ill work on tis tonight (May have to make
   a custom task, the jtidy one is crap)
  
   On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de
  wrote:
This also OOMs locally here!
   
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
eMail: u...@thetaphi.de
   
-Original Message-
From: Policeman Jenkins Server
[mailto:jenk...@sd-datasolutions.de]
Sent: Thursday, October 25, 2012 3:40 PM
To: dev@lucene.apache.org
Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
Build #
1297 - Still Failing!
   
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
Windows/1297/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
   
All tests passed
   
Build Log:
[...truncated 24417 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\build.xml:60:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\build.xml:235: The following error occurred while
executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\common-build.xml:1577:
  java.lang.OutOfMemoryError: Java heap space

Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1297 - Still Failing!

2012-10-25 Thread Robert Muir
I can provide more info later; if you can even get a rough sketch or
something, I can also try to help...

On Thu, Oct 25, 2012 at 1:58 PM, Robert Muir rcm...@gmail.com wrote:
 https://issues.apache.org/jira/browse/LUCENE-4505

 On Thu, Oct 25, 2012 at 1:48 PM, Uwe Schindler u...@thetaphi.de wrote:
 Ok, can you open issue and give me some hints about the (Java-)APIs to use, 
 I can code it :-)

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, October 25, 2012 6:56 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build #
 1297 - Still Failing!

 I dont know groovy, but this sounds great.

 I only used the built-in task to find the javadocs bugs quickly and because 
 it
 was much easier, but i hate it

 The worst thing about the built-in task is it puts all its output in this
 ByteArrayStream instead of a file, so if you actually have broken docs, it 
 just
 tells you jtidy failed but hides all the useful output.


 On Thu, Oct 25, 2012 at 12:50 PM, Uwe Schindler u...@thetaphi.de wrote:
  Can we use groovy to instantiate Tidy classes and pass filesets? We don't
 need a task if it is only missing some new Tidy(...).check(...)
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Thursday, October 25, 2012 6:40 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
  Build #
  1297 - Still Failing!
 
  Thanks. I tried a blind stab (trunk-only).
 
  The issue is the built in task makes a ByteArrayOutputStream or
  whatever for all the output. So I think this grows large on all the 
  files.
 
  unfortunately if we use quiet=true, the task no longer fails on
  error?! But i told it not to emit any warnings, so it writes about half 
  as
 much.
 
  still this task is really annoying, we probably just need a custom one.
 
  On Thu, Oct 25, 2012 at 11:55 AM, Uwe Schindler u...@thetaphi.de
 wrote:
   I committed a 64bit-only workaround for now...
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
  
   -Original Message-
   From: Robert Muir [mailto:rcm...@gmail.com]
   Sent: Thursday, October 25, 2012 5:03 PM
   To: dev@lucene.apache.org
   Subject: Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
   Build #
   1297 - Still Failing!
  
   for now, use a 64-bit jvm. ill work on tis tonight (May have to make
   a custom task, the jtidy one is crap)
  
   On Thu, Oct 25, 2012 at 10:19 AM, Uwe Schindler u...@thetaphi.de
  wrote:
This also OOMs locally here!
   
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
eMail: u...@thetaphi.de
   
-Original Message-
From: Policeman Jenkins Server
[mailto:jenk...@sd-datasolutions.de]
Sent: Thursday, October 25, 2012 3:40 PM
To: dev@lucene.apache.org
Subject: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) -
Build #
1297 - Still Failing!
   
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-
Windows/1297/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC
   
All tests passed
   
Build Log:
[...truncated 24417 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
   Windows\build.xml:60:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\build.xml:235: The following error occurred while
executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-
Windows\lucene\common-build.xml:1577:
  java.lang.OutOfMemoryError: Java heap space

[jira] [Commented] (LUCENE-4505) improve jtidy javadocs check

2012-10-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484332#comment-13484332
 ] 

Robert Muir commented on LUCENE-4505:
-

Here is the command-line equivalent: say I screw up our lucene/docs/index.html 
and add an unclosed bold tag in the getting started paragraph,
and a bogus tag at the end:
{noformat}
rmuir@beast:~/workspace/lucene-trunk/lucene/build/docs$ java -jar 
~/Downloads/jtidy-r938.jar -e -q index.html
line 1 column 1 - Warning: missing <!DOCTYPE> declaration
line 24 column 62 - Warning: missing </b> before </p>
line 27 column 1 - Warning: inserting implicit <b>
line 111 column 1 - Error: <dfdsfdsf> is not recognized!
line 111 column 1 - Warning: content occurs after end of body
line 111 column 1 - Warning: discarding unexpected <dfdsfdsf>
line 112 column -3 - Warning: content occurs after end of body
line 112 column -3 - Warning: discarding unexpected </dfdsfdsf>
{noformat}

Basically we want to fail if there is any output like this at all. Note that only 
one of the problems is an error!
The warnings are also bogus things we should fix.

NOTE: there are some false warnings that are bugs in 'javadocs itself', but 
it seems we could just filter those out:
{noformat}
rmuir@beast:~/workspace/lucene-trunk/lucene/build/docs$ java -jar 
~/Downloads/jtidy-r938.jar -e -q core/deprecated-list.html 
line 152 column 20 - Warning: <a> escaping malformed URI reference
{noformat}

That's because javadoc generates bogus URLs like <a 
href="org/apache/lucene/search/FuzzyQuery.html#floatToEdits(float, int)">
instead of escaping with %20... 
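For illustration, the escaping jtidy expects amounts to percent-encoding the space inside the anchor. `escapeSpaces` below is a hypothetical helper, not javadoc or jtidy code; a full URI encoder would handle more characters than the space shown here.

```java
public class EscapeAnchor {
    // jtidy flags the raw space; percent-encoding it yields a
    // well-formed URI reference.
    static String escapeSpaces(String href) {
        return href.replace(" ", "%20");
    }

    public static void main(String[] args) {
        String raw = "org/apache/lucene/search/FuzzyQuery.html#floatToEdits(float, int)";
        System.out.println(escapeSpaces(raw));
        // org/apache/lucene/search/FuzzyQuery.html#floatToEdits(float,%20int)
    }
}
```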

 improve jtidy javadocs check
 

 Key: LUCENE-4505
 URL: https://issues.apache.org/jira/browse/LUCENE-4505
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Currently we are using the ant task 
 (http://sourceforge.net/p/jtidy/code/1261/tree/trunk/jtidy/src/main/java/org/w3c/tidy/ant/JTidyTask.java)
  built into jtidy itself.
 This has a number of disadvantages:
 * at least in the version we are using, creates a ByteArrayDataOutput that 
 hides all the output. So if there is an error, its no good.
 * requires creation of a temp directory: even though we disable the actual 
 output with a parameter, this means it creates thousands of 0 byte files
 We only pass 3 options to tidy today:
 * input-encoding=UTF-8
 * only-errors=true
 * show-warnings=false -- this one is a OOM hack.
 Ideally i think we would:
 * pass input-encoding=UTF-8, only-errors=true, quiet=true.
 * send all output to a single file or property.
 * if this contains any contents, fail and print the contents.
 This would mean we would fail on warnings too (I checked, this is a good 
 thing, there would be some things to fix).
 So as a start we could just set show-warnings=false temporarily so we only 
 fail on errors like today.




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1302 - Failure!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1302/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 24589 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:235: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1577:
 java.lang.OutOfMemoryError: Java heap space
at java.nio.CharBuffer.wrap(CharBuffer.java:369)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:310)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:126)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:112)
at java.io.InputStreamReader.read(InputStreamReader.java:168)
at 
org.w3c.tidy.StreamInJavaImpl.readCharFromStream(StreamInJavaImpl.java:164)
at org.w3c.tidy.StreamInJavaImpl.readChar(StreamInJavaImpl.java:232)
at org.w3c.tidy.Lexer.getToken(Lexer.java:1944)
at org.w3c.tidy.ParserImpl$ParseList.parse(ParserImpl.java:1620)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2464)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2464)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseBody.parse(ParserImpl.java:971)
at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
at org.w3c.tidy.ParserImpl$ParseHTML.parse(ParserImpl.java:483)
at org.w3c.tidy.ParserImpl.parseDocument(ParserImpl.java:3401)
at org.w3c.tidy.Tidy.parse(Tidy.java:433)
at org.w3c.tidy.Tidy.parse(Tidy.java:263)
at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)

Total time: 52 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseG1GC
Email was triggered for: Failure
Sending email for trigger: Failure




Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1302 - Failure!

2012-10-25 Thread Robert Muir
My idea didn't work. I'll re-disable on 32-bit for now.

On Thu, Oct 25, 2012 at 2:29 PM, Policeman Jenkins Server
jenk...@sd-datasolutions.de wrote:
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1302/
 Java: 32bit/jdk1.7.0_07 -client -XX:+UseG1GC

 All tests passed



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues

2012-10-25 Thread TomShally (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TomShally updated LUCENE-4504:
--

Attachment: Lucene4504Test.java

Attached test fails with trunk, passes with patch.

bq. are we sure the first if shouldn't return 1?

According to the javadocs and test: yes. It's supposed to compare the candidate 
(doc, docValue) against the after value (valueObj, value) and return -1 if 
the candidate is less than the provided after value.
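The sign convention being argued here can be illustrated with a small standalone sketch (illustrative only; `compareDocToValue` below is a stand-in for the method on Lucene's ValueSourceComparator, not its actual code): for an ascending sort, comparing the candidate doc's value against the searchAfter value must be negative when the candidate sorts before that value.

```java
// Sketch of the compareDocToValue contract discussed above (ascending sort):
// negative => the candidate doc sorts before the "after" value, so
// searchAfter must skip it; positive => the candidate comes after the
// cursor and may be collected.
public class CompareDocToValueSketch {
  static int compareDocToValue(double docValue, double afterValue) {
    // The reported bug was the inverted sign: returning -1 when docValue
    // is greater, which made every hit look "before" the after point and
    // produced empty results.
    return Double.compare(docValue, afterValue);
  }

  public static void main(String[] args) {
    System.out.println(compareDocToValue(1.0, 2.0)); // -1: before the cursor
    System.out.println(compareDocToValue(3.0, 2.0)); //  1: eligible hit
  }
}
```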

 Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
 -

 Key: LUCENE-4504
 URL: https://issues.apache.org/jira/browse/LUCENE-4504
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0
Reporter: TomShally
Priority: Minor
 Attachments: LUCENE-4504.patch, Lucene4504Test.java


 IS.searchAfter() always returns an empty result when using FunctionValues for 
 sorting.
 The culprit is ValueSourceComparator.compareDocToValue() returning -1 when it 
 should return +1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (SOLR-3984) Solr Admin Unload with deleteInstanceDir=true fails unless the path is absolute.

2012-10-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-3984.
--

Resolution: Fixed

r: 1402254 (trunk)
r: 1402282 (4x branch)


 Solr Admin Unload with deleteInstanceDir=true fails unless the path is 
 absolute.
 

 Key: SOLR-3984
 URL: https://issues.apache.org/jira/browse/SOLR-3984
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
Reporter: Raintung Li
Assignee: Erick Erickson
 Fix For: 4.1, 5.0

 Attachments: patch.txt, SOLR-3984.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Call URL :
 http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores
 Check the disk path:
 folder /apache-solr-4.0.0/example3/solr/mycollection1 still exists, but the 
 caller response is success.




[jira] [Commented] (SOLR-1293) Support for large no:of cores and faster loading/unloading of cores

2012-10-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484479#comment-13484479
 ] 

Erick Erickson commented on SOLR-1293:
--

I've implemented some parts of this (SOLR-880, SOLR-1028), I should be checking 
them in sometime relatively soon, then on to some other JIRAs related to this 
one. But I got to thinking that maybe what we really want is two new 
characteristics for cores; call them loadOnStartup(T|F, default T) and 
sticky(T|F, default T). 

What I've done so far conflates the two ideas; things loaded lazily are 
assumed to be NOT sticky and there's really no reason to conflate them. Use 
cases are

LOS=T, STICKY=T - really, what we have now. Pay the penalty on startup for 
loading the core at startup in exchange for speed later.

LOS=T, STICKY=F - load on startup, but allow the core to be automatically 
unloaded later. For preloading expected 'hot' cores. Cores are unloaded on an 
LRU basis. NOTE: a core can be unloaded and then loaded again later if it's 
referenced.

LOS=F, STICKY=T - Defer loading the core, but once it's loaded, keep it loaded. 
Gets us started fast, amortizes loading the core. This one I actually expect 
to be the least useful, but it's a consequence of the others and doesn't cost 
anything extra to implement coding-wise.

LOS=F, STICKY=F - what I was originally thinking of as lazy loading. Cores 
get loaded when first referenced, and swapped out on an LRU algorithm.

Looking at what I've done on the two JIRA's mentioned, this is actually not at 
all difficult, just a matter of putting the CoreConfig in the right list...

So, if any STICKY=F is found, there's an LRU cache created (actually a 
LinkedHashMap with removeEldestEntry overridden), with an optional size 
specified in the cores... tag. I'd guess I'll default it to 100 or some such 
if (and only if) there's at least one STICKY=F defined but no cache size in 
cores... Of course if the user defined cacheSize in cores..., I'd allocate 
the cache up front.

Thoughts?
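The transient-core cache Erick describes hinges on a LinkedHashMap with removeEldestEntry overridden. A minimal sketch of that idea (class and method names here are illustrative, not taken from the Solr patches):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache in the style described above: a LinkedHashMap in access
// order whose removeEldestEntry hook evicts once capacity is exceeded.
public class TransientCoreCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public TransientCoreCache(int maxSize) {
    // accessOrder=true: iteration order is least-recently-used first
    super(16, 0.75f, true);
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Returning true evicts the LRU entry; a real implementation would
    // close the evicted core here rather than just dropping the reference.
    return size() > maxSize;
  }

  public static void main(String[] args) {
    TransientCoreCache<String, String> cache = new TransientCoreCache<>(2);
    cache.put("core1", "c1");
    cache.put("core2", "c2");
    cache.get("core1");       // touch core1 so core2 becomes the eldest
    cache.put("core3", "c3"); // exceeds capacity: evicts core2
    System.out.println(cache.containsKey("core2")); // false
    System.out.println(cache.containsKey("core1")); // true
  }
}
```

Because a core can be unloaded and later reloaded on reference, eviction here would translate to closing the core, not losing it.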

 Support for large no:of cores and faster loading/unloading of cores
 ---

 Key: SOLR-1293
 URL: https://issues.apache.org/jira/browse/SOLR-1293
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
 Fix For: 4.1

 Attachments: SOLR-1293.patch


 Solr, currently, is not very suitable for a large no. of homogeneous cores 
 where you require fast/frequent loading/unloading of cores. Usually a core 
 is required to be loaded just to fire a search query or to just index one 
 document.
 The requirements of such a system are.
 * Very efficient loading of cores. Solr cannot afford to read, parse, and 
 create Schema and SolrConfig objects for each core every time the core has to be 
 loaded (SOLR-919, SOLR-920)
 * START/STOP core. Currently it is only possible to unload a core (SOLR-880)
 * Automatic loading of cores. If a core is present and it is not loaded, and 
 a request comes for it, load it automatically before serving the request
 * As there are a large no. of cores, all the cores cannot be kept loaded 
 always. There has to be an upper limit beyond which we need to unload a few 
 cores (probably the least recently used ones)
 * Automatic allotment of dataDir for cores. If the no. of cores is too high, all 
 the cores' dataDirs cannot live in the same dir. There is an upper limit on 
 the no. of dirs you can create in a unix dir w/o affecting performance




Re: svn commit: r1402078 - in /lucene/dev/branches/LUCENE-2878/lucene: core/src/java/org/apache/lucene/search/ core/src/java/org/apache/lucene/search/positions/ core/src/test/org/apache/lucene/search/

2012-10-25 Thread Simon Willnauer
On Thu, Oct 25, 2012 at 12:09 PM,  romseyg...@apache.org wrote:
 Author: romseygeek
 Date: Thu Oct 25 10:09:33 2012
 New Revision: 1402078

 URL: http://svn.apache.org/viewvc?rev=1402078&view=rev
 Log:
 Move IntervalFilter and IntervalCollector to top-level classes; add javadocs

 Added:
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalCollector.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalFilter.java
 Modified:
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/PhraseScorer.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalFilterQuery.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/NonOverlappingQuery.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/OrderedNearQuery.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/RangeIntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/SnapshotPositionCollector.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/UnorderedNearQuery.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/WithinIntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/WithinOrderedFilter.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/test/org/apache/lucene/search/positions/TestBasicIntervals.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/test/org/apache/lucene/search/positions/TestBlockIntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/test/org/apache/lucene/search/positions/TestOrderedConjunctionIntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/highlighter/src/java/org/apache/lucene/search/highlight/positions/ArrayIntervalIterator.java
 
 lucene/dev/branches/LUCENE-2878/lucene/highlighter/src/java/org/apache/lucene/search/highlight/positions/HighlightingIntervalCollector.java
 
 lucene/dev/branches/LUCENE-2878/lucene/highlighter/src/test/org/apache/lucene/search/highlight/positions/IntervalHighlighterTest.java

 Modified: 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/PhraseScorer.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/PhraseScorer.java?rev=1402078&r1=1402077&r2=1402078&view=diff
 ==
 --- 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/PhraseScorer.java
  (original)
 +++ 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/PhraseScorer.java
  Thu Oct 25 10:09:33 2012
 @@ -20,7 +20,7 @@ package org.apache.lucene.search;
  import org.apache.lucene.index.DocsAndPositionsEnum;
  import org.apache.lucene.search.positions.Interval;
  import org.apache.lucene.search.positions.IntervalIterator;
 -import org.apache.lucene.search.positions.IntervalIterator.IntervalCollector;
 +import org.apache.lucene.search.positions.IntervalCollector;
  import org.apache.lucene.search.similarities.Similarity;

  import java.io.IOException;

 Added: 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalCollector.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalCollector.java?rev=1402078&view=auto
 ==
 --- 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalCollector.java
  (added)
 +++ 
 lucene/dev/branches/LUCENE-2878/lucene/core/src/java/org/apache/lucene/search/positions/IntervalCollector.java
  Thu Oct 25 10:09:33 2012
 @@ -0,0 +1,43 @@
 +package org.apache.lucene.search.positions;
 +
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 

[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484556#comment-13484556
 ] 

Mark Miller commented on SOLR-3920:
---

Yeah, the caching is whack if you change up the collection list.

I've got a test and fix.
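One way to avoid the stale-routing problem Mark describes (a sketch only; this is not the actual SOLR-3920 patch, and the names here are hypothetical) is to key any cached URL list by the collection name instead of caching a single list that goes stale when the request's collection parameter changes:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache resolved server URLs per collection name, so switching the
// "collection" request param between updates cannot reuse stale routing.
public class PerCollectionUrlCache {
  private final Map<String, List<String>> urlsByCollection = new ConcurrentHashMap<>();

  List<String> urlsFor(String collection) {
    // Resolve and cache on first use; later requests for a different
    // collection get their own entry rather than the previous list.
    return urlsByCollection.computeIfAbsent(collection, this::resolveFromClusterState);
  }

  // Stand-in for reading live cluster state (e.g. from ZooKeeper).
  private List<String> resolveFromClusterState(String collection) {
    return List.of("http://host1:8983/solr/" + collection);
  }

  public static void main(String[] args) {
    PerCollectionUrlCache cache = new PerCollectionUrlCache();
    System.out.println(cache.urlsFor("collection1"));
    System.out.println(cache.urlsFor("collection2"));
  }
}
```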

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);
   
   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");
   
   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);
   
   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");
   
   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);
   
   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);
   
   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.




[jira] [Commented] (SOLR-3975) Document Summarization toolkit, using LSA techniques

2012-10-25 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484559#comment-13484559
 ] 

Lance Norskog commented on SOLR-3975:
-

It's a first draft, not ready for committing. It needs strategies for 
controlling processing time, and code cleanups. I wanted to get it out for 
review before sinking even more time into it.

 Document Summarization toolkit, using LSA techniques
 

 Key: SOLR-3975
 URL: https://issues.apache.org/jira/browse/SOLR-3975
 Project: Solr
  Issue Type: New Feature
Reporter: Lance Norskog
Priority: Minor
 Attachments: 4.1.summary.patch, reuters.sh


 This package analyzes sentences and words as used across sentences to rank 
 the most important sentences and words. The general topic is called document 
 summarization and is a popular research topic in textual analysis. 
 How to use:
 1) Check out the 4.x branch, apply the patch, build, and run the solr/example 
 instance.
 2) Download the first Reuters article corpus from:
 http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz
 3) Unpack this into a directory.
 4) Run the attached 'reuters.sh' script:
 sh reuters.sh directory http://localhost:8983/solr/collection1
 5) Wait several minutes.
 Now go to http://localhost:8983/solr/collection1/browse?summary=true and look 
 at the large gray box marked 'Document Summary'. This has a table of 
 statistics about the analysis, the three most important sentences, and 
 several of the most important words in the documents. The sentences have the 
 important words in italics.
 The code is packaged as a search component and as an analysis handler. The 
 /browse demo uses the search component, and you can also post raw text to  
 http://localhost:8983/solr/collection1/analysis/summary. Here is a sample 
 command:
 {code}
 curl -s 
 "http://localhost:8983/solr/analysis/summary?indent=true&echoParams=all&file=$FILE&wt=xml"
  --data-binary @$FILE -H 'Content-type:application/xml'
 {code}
 This is an implementation of LSA-based document summarization. A short 
 explanation and a long evaluation are described in my blog, [Uncle Lance's 
 Ultra Whiz Bang|http://ultrawhizbang.blogspot.com], starting here: 
 [http://ultrawhizbang.blogspot.com/2012/09/document-summarization-with-lsa-1.html]




[jira] [Updated] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2012-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3920:
--

Attachment: SOLR-3920.patch

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);
   
   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");
   
   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);
   
   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");
   
   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);
   
   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);
   
   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.




[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484611#comment-13484611
 ] 

Mark Miller commented on SOLR-3939:
---

I've committed my latest work to 4x, Joel - can you do a bit more testing with a 
recent checkout?

 An empty or just replicated index cannot become the leader of a shard after a 
 leader goes down.
 ---

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
Priority: Critical
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time but a large % of the time. 
 To setup a test, startup Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




Lucene/Solr 4.01 / 4.1

2012-10-25 Thread Mark Miller
I think we should start whatever our next release is very soon.

Given some of the SolrCloud issues, I'd like to do a 4.0.1 personally. But the 
issues are not data loss issues, so I could be convinced that 4.1 is fine. But 
I feel that means fewer people will move quickly from 4.0 in that case, and I 
don't like it.

I'm willing to pitch in on the release process. I don't know that I have the 
time (ApacheCon is coming up among other things) to do all of the work - but 
I'm happy to co-release-manage with anyone else willing to join in on a 4.0.1 
and/or 4.1 effort. I'd love to see something come out in the first half of 
Nov. With ApacheCon, that probably means starting sooner rather than later. I'm 
busy trying to wrap up any important SolrCloud bug fixes.

Thoughts? Others who can dedicate some time to getting a release out?

- Mark



[jira] [Updated] (SOLR-3933) Distributed commits are not guaranteed to be ordered within a request.

2012-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3933:
--

Fix Version/s: (was: 4.0.1)

 Distributed commits are not guaranteed to be ordered within a request.
 --

 Key: SOLR-3933
 URL: https://issues.apache.org/jira/browse/SOLR-3933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3933.patch


 Update requests that also include a commit may do adds or deletes after the 
 commit - it's a race.
 This would most likely affect concurrent update server or bulk add methods - 
 but it's still a race for a single doc update or delete that includes a 
 commit as well.




[jira] [Resolved] (SOLR-3933) Distributed commits are not guaranteed to be ordered within a request.

2012-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3933.
---

Resolution: Fixed

Fix committed to 4X and 5X

 Distributed commits are not guaranteed to be ordered within a request.
 --

 Key: SOLR-3933
 URL: https://issues.apache.org/jira/browse/SOLR-3933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3933.patch


 Update requests that also include a commit may do adds or deletes after the 
 commit - it's a race.
 This would most likely affect concurrent update server or bulk add methods - 
 but it's still a race for a single doc update or delete that includes a 
 commit as well.




[jira] [Resolved] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2012-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3920.
---

Resolution: Fixed

committed to 4X and 5X

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);
   
   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");
   
   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);
   
   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");
   
   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);
   
   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);
   
   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.




[jira] [Resolved] (SOLR-3932) SolrCmdDistributorTest either takes 3 seconds or 3 minutes.

2012-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3932.
---

Resolution: Fixed

 SolrCmdDistributorTest either takes 3 seconds or 3 minutes.
 ---

 Key: SOLR-3932
 URL: https://issues.apache.org/jira/browse/SOLR-3932
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: stack.txt


 I've looked into this a little in the past, but had not come to a conclusion. 
 It really bugs me because it doubles my test run time from 3 minutes to 6 
 minutes when it happens.
 I've been looking into it today and I think I've tracked the problem down to 
 mostly test bugs. One real bug around distrib commit ordering was also 
 uncovered.




Re: Lucene/Solr 4.01 / 4.1

2012-10-25 Thread Robert Muir
On Thu, Oct 25, 2012 at 8:52 PM, Mark Miller markrmil...@gmail.com wrote:
 I think we should start whatever our next release is very soon.

 Given some of the SolrCloud issues, I'd like to do a 4.0.1 personally. But 
 the issues are not data loss issues, so I could be convinced that 4.1 is 
 fine. But I feel that means less people will move quickly from 4.0 in that 
 case, and I don't like it.

Again I disagree, but I'll make my argument mainly from a release
engineering perspective.

It's simple:
4.1 exists today, jenkins is kicking the shit out of it, if we made a
good RC it could maybe even pass.
4.0.1 does not yet exist! I don't really see bugfixes backported to the
lucene_solr_4_0_0 branch. If we made a flurry of backports, this would
likely create bugs that 4.1 doesn't have.

So 4.1 will be a more stable, more reliable release for these reasons.
It has nothing to do with how important bugs are or anything. Creating
a 4.0.1 from scratch is work that I'm not interested in doing (even as
a bugfixer backporting bugs I have fixed, 4.1 is a more efficient
investment here).


 I'm willing to pitch in on the release process. I don't know that I have the 
 time (ApacheCon is coming up among other things) to do all of the work - but 
 I'm happy to co-release-manage with anyone else willing to join in on a 4.0.1 
 and/or 4.1 effort. I've love to see something come out in the first half of 
 Nov. With ApacheCon, that probably means starting sooner rather than later. 
 I'm busy trying to wrap up any important SolrCloud bug fixes.


I can help with a 4.1




[jira] [Commented] (SOLR-3561) Error during deletion of shard/core

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484629#comment-13484629
 ] 

Mark Miller commented on SOLR-3561:
---

It's very likely this could have been SOLR-3939.

 Error during deletion of shard/core
 ---

 Key: SOLR-3561
 URL: https://issues.apache.org/jira/browse/SOLR-3561
 Project: Solr
  Issue Type: Bug
  Components: multicore, replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
Reporter: Per Steffensen
Assignee: Mark Miller
 Fix For: 4.1, 5.0


 Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
 servers).
 Several collections with several slices and one replica for each slice (each 
 slice has two shards)
 Basically we want to let our system delete an entire collection. We do this by 
 trying to delete each and every shard under the collection. Each shard is 
 deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
 Solr
 {code}
 CoreAdminRequest request = new CoreAdminRequest();
 request.setAction(CoreAdminAction.UNLOAD);
 request.setCoreName(shardName);
 CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
 {code}
 The delete/unload succeeds, but in like 10% of the cases we get errors on 
 involved Solr servers, right around the time where shard/cores are deleted, 
 and we end up in a situation where ZK still claims (forever) that the deleted 
 shard is still present and active.
 From here the issue is more easily explained by a more concrete example:
 - 7 Solr servers involved
 - Several collection a.o. one called collection_2012_04, consisting of 28 
 slices, 56 shards (remember 1 replica for each slice) named 
 collection_2012_04_sliceX_shardY for all pairs in {X:1..28}x{Y:1,2}
 - Each Solr server running 8 shards, e.g Solr server #1 is running shard 
 collection_2012_04_slice1_shard1 and Solr server #7 is running shard 
 collection_2012_04_slice1_shard2 belonging to the same slice slice1.
 When we decide to delete the collection collection_2012_04 we go through 
 all 56 shards and delete/unload them one-by-one - including 
 collection_2012_04_slice1_shard1 and collection_2012_04_slice1_shard2. At 
 some point during or shortly after all this deletion we see the following 
 exceptions in solr.log on Solr server #7
 {code}
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
 core not found:collection_2012_04_slice1_shard1
 request: 
 http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
 at 
 org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Recovery failed - trying again...
 Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
 WARNING:
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
 at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
 at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
 Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
 {code}
 I'm not sure exactly how to interpret this, but it seems to me that some 
 recovery job tries to recover collection_2012_04_slice1_shard2 on Solr server 
 #7 from collection_2012_04_slice1_shard1 on Solr server #1, 

Re: Lucene/Solr 4.01 / 4.1

2012-10-25 Thread Mark Miller
In my case, all the important bug fixes were only just recently fixed or I'm 
still fixing them - so for my stuff, I see a larger negative with 4.1 vs 4.0.1. 
They won't bake long in either version - but they should go out soon regardless.

In any case, regardless of the opinion about whether 4.1 really would be more 
stable than a 4.0.1 (I think it could be argued even in the time to bake case), 
I don't believe users will react with that line of reasoning on average.

I think the best way to get users to upgrade and avoid some nasty bugs is to 
label something 4.0.1.

Like I said, when you see a 4.0 go to 4.0.1, for most, it's a no-brainer to 
upgrade. In fact, you normally assume you should - bugs must have been fixed - 
potentially bad ones. At worst it gets you to read the changes. Now when a 4.1 
comes out, that's a feature release. That feels more dangerous (regardless of 
reality). That's something perhaps I'll think about in a few months when things 
slow down - or not at all. That's the type of thing we have changed runtime 
behavior in before - or made back compat break calls. Even with the new 
development style, that stuff will come up. And even if it didn't, it's just 
how people think about software and versions in general - and we are not easily 
going to change that - IMO it's best to use it.


- Mark

On Oct 25, 2012, at 9:02 PM, Robert Muir rcm...@gmail.com wrote:

 On Thu, Oct 25, 2012 at 8:52 PM, Mark Miller markrmil...@gmail.com wrote:
 I think we should start whatever our next release is very soon.
 
 Given some of the SolrCloud issues, I'd like to do a 4.0.1 personally. But 
 the issues are not data loss issues, so I could be convinced that 4.1 is 
 fine. But I feel that means less people will move quickly from 4.0 in that 
 case, and I don't like it.
 
 Again I disagree, but I'll make my argument mainly from a release
 engineering perspective.
 
 It's simple:
 4.1 exists today, jenkins is kicking the shit out of it, if we made a
 good RC it could maybe even pass.
 4.0.1 does not yet exist! I don't really see bugfixes backported to the
 lucene_solr_4_0_0 branch. If we made a flurry of backports, this would
 likely create bugs that 4.1 doesn't have.
 
 So 4.1 will be a more stable, more reliable release for these reasons.
 It has nothing to do with how important bugs are or anything. Creating
 a 4.0.1 from scratch is work that I'm not interested in doing (even as
 a bugfixer backporting bugs I have fixed, 4.1 is a more efficient
 investment here).





[jira] [Updated] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues

2012-10-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4504:


Attachment: LUCENE-4504.patch

Thanks for the test Tom! 

I rolled both these into a combined patch: I'll commit soon.

 Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
 -

 Key: LUCENE-4504
 URL: https://issues.apache.org/jira/browse/LUCENE-4504
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0
Reporter: TomShally
Priority: Minor
 Attachments: LUCENE-4504.patch, LUCENE-4504.patch, Lucene4504Test.java


 IS.searchAfter() always returns an empty result when using FunctionValues for 
 sorting.
 The culprit is ValueSourceComparator.compareDocToValue() returning -1 when it 
 should return +1.
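 The effect of that sign error can be shown outside Lucene with a toy model:
 searchAfter-style paging keeps only documents whose sort value compares
 strictly greater than the anchor, so an inverted sign keeps the wrong
 documents and drops the ones the next page should contain. A minimal
 sketch with hypothetical names (not Lucene's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of searchAfter paging over ascending sort values: keep docs
// whose value sorts strictly after the anchor. The comparator must be
// positive when the doc's value is greater than the anchor.
class SearchAfterSketch {
    // Buggy variant, mirroring the reported sign inversion.
    static int compareDocToValueBuggy(double docValue, double anchor) {
        return docValue > anchor ? -1 : docValue < anchor ? 1 : 0;
    }

    // Fixed variant: positive exactly when docValue > anchor.
    static int compareDocToValueFixed(double docValue, double anchor) {
        return Double.compare(docValue, anchor);
    }

    // Collect the docs that sort after 'anchor'.
    static List<Double> page(double[] docs, double anchor, boolean buggy) {
        List<Double> hits = new ArrayList<>();
        for (double d : docs) {
            int cmp = buggy ? compareDocToValueBuggy(d, anchor)
                            : compareDocToValueFixed(d, anchor);
            if (cmp > 0) hits.add(d);  // only docs strictly after the anchor
        }
        return hits;
    }

    public static void main(String[] args) {
        double[] docs = {1.0, 2.0, 3.0, 4.0};
        System.out.println(page(docs, 2.0, false)); // prints [3.0, 4.0]
        System.out.println(page(docs, 2.0, true));  // prints [1.0]
    }
}
```

 With the inverted sign the "next page" after anchor 2.0 contains the
 already-returned doc instead of the remaining ones, so paging never
 advances.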

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (SOLR-3994) Create more extensive tests around unloading cores.

2012-10-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3994:
-

 Summary: Create more extensive tests around unloading cores.
 Key: SOLR-3994
 URL: https://issues.apache.org/jira/browse/SOLR-3994
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.1, 5.0







Re: Lucene/Solr 4.01 / 4.1

2012-10-25 Thread Robert Muir
On Thu, Oct 25, 2012 at 9:47 PM, Mark Miller markrmil...@gmail.com wrote:
 In my case, all the important bug fixes were only just recently fixed or I'm 
 still fixing them - so for my stuff, I see a larger negative with 4.1 vs 
 4.0.1. They won't bake long in either version - but they should go out soon 
 regardless.


This can be easily mitigated: just commit to trunk and spin up an
extra jenkins against it. But 4.1 is already stable on the lucene side
and I don't think we should go backwards.

There is just a lot of little shit, like javadocs fixes, improvements
to the build, etc that would make it a higher quality release. We also
have enough features already to make it a real release
(http://wiki.apache.org/lucene-java/ReleaseNote41).

I'm not really worried about playing tricks trying to convince users
to upgrade, I think we should just focus on quality releases and that
comes naturally.




[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484652#comment-13484652
 ] 

Joel Bernstein commented on SOLR-3939:
--

I ran the Oct 14th test and the leader election worked perfectly. Then I tested 
shutting down the leader VM instead of unloading the leader core, and this 
worked fine.  

Then I tried a leader with two replicas that had both just been replicated to. 
When I unloaded the leader neither replica became leader. But this was the case 
that was not yet accounted for I believe.

I can't think of a use case where the second scenario would happen though.

The first scenario though is critical for migrating micro-shards, so it's great 
that you committed this.

Thanks for your work on this issue.

Joel






 An empty or just replicated index cannot become the leader of a shard after a 
 leader goes down.
 ---

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
Priority: Critical
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time but a large % of the time. 
 To setup a test, startup Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Resolved] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues

2012-10-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4504.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.1

Thanks again Tom!

 Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
 -

 Key: LUCENE-4504
 URL: https://issues.apache.org/jira/browse/LUCENE-4504
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0
Reporter: TomShally
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4504.patch, LUCENE-4504.patch, Lucene4504Test.java


 IS.searchAfter() always returns an empty result when using FunctionValues for 
 sorting.
 The culprit is ValueSourceComparator.compareDocToValue() returning -1 when it 
 should return +1.




[jira] [Created] (LUCENE-4506) Fix smoketester to not run checkJavadocsLinks.py across java6-generated javadocs

2012-10-25 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-4506:
---

 Summary: Fix smoketester to not run checkJavadocsLinks.py across 
java6-generated javadocs
 Key: LUCENE-4506
 URL: https://issues.apache.org/jira/browse/LUCENE-4506
 Project: Lucene - Core
  Issue Type: Task
  Components: general/test
Reporter: Robert Muir


Currently smokeTester (ant nightly-smoke) fails, because it invokes
this python script directly and the javadocs checker is more picky.

However, java6's javadocs generates hopelessly broken html.

We should fix it to only do this across the java7-generated javadocs and get 
smokeTesting passing again.




[jira] [Commented] (LUCENE-4506) Fix smoketester to not run checkJavadocsLinks.py across java6-generated javadocs

2012-10-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484669#comment-13484669
 ] 

Robert Muir commented on LUCENE-4506:
-

Also:
* we only run lucene's demo with java6 (solr's example is correctly tested with 
both java6 and java7)
* we only run lucene's tests with java6
* we only run lucene's javadocs with java6.

I'm testing up a patch to fix all of this: for the javadocs case the idea is
to do a degraded verification of javadocs for java6, but full checks for java7.

 Fix smoketester to not run checkJavadocsLinks.py across java6-generated 
 javadocs
 

 Key: LUCENE-4506
 URL: https://issues.apache.org/jira/browse/LUCENE-4506
 Project: Lucene - Core
  Issue Type: Task
  Components: general/test
Reporter: Robert Muir

 Currently smokeTester (ant nightly-smoke) fails, because it invokes
 this python script directly and the javadocs checker is more picky.
 However, java6's javadocs generates hopelessly broken html.
 We should fix it to only do this across the java7-generated javadocs and get 
 smokeTesting passing again.




[jira] [Commented] (SOLR-3994) Create more extensive tests around unloading cores.

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484671#comment-13484671
 ] 

Mark Miller commented on SOLR-3994:
---

I wrote a test that fires up N cores (doing 20 at the moment) fairly 
concurrently - and then unloads them all fairly concurrently.

This found a variety of minor nit issues and one more major issue - a deadlock 
around shutting down cores - a SolrCore might be closed in the recovery thread, 
which may then trigger a cancel recovery that can never finish because it's 
being called from the recovery thread.

Fixed the nits, fixed that issue, will create a JIRA for it and commit that fix 
with the nit fixes.
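The shutdown deadlock described above is the classic self-join pattern: the
recovery thread drops the last core reference, close() triggers a cancel of
recovery, and the cancel waits for the recovery thread - i.e. for itself. A
minimal sketch of the guard, with hypothetical names (not Solr's actual
RecoveryStrategy code):

```java
// Sketch: a close() that must never wait for the very thread running it.
// If close() is invoked from inside the recovery thread, joining that
// thread would block forever, so the guard skips the join.
class RecoverySketch {
    volatile Thread recoveryThread;
    volatile boolean closed;

    void startRecovery(Runnable work) {
        recoveryThread = new Thread(() -> {
            work.run();
            close();  // last core reference dropped inside the recovery thread
        });
        recoveryThread.start();
    }

    void close() {
        Thread t = recoveryThread;
        // Only wait for recovery when we are NOT the recovery thread.
        if (t != null && Thread.currentThread() != t) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        closed = true;
    }

    public static void main(String[] args) throws Exception {
        RecoverySketch core = new RecoverySketch();
        core.startRecovery(() -> {});
        core.recoveryThread.join();  // terminates: no self-join deadlock
        System.out.println("closed cleanly: " + core.closed);
    }
}
```

Without the `currentThread()` check, the worker thread would block in
`join()` on itself and shutdown would hang.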


 Create more extensive tests around unloading cores.
 ---

 Key: SOLR-3994
 URL: https://issues.apache.org/jira/browse/SOLR-3994
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.1, 5.0







[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-25 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484683#comment-13484683
 ] 

Yonik Seeley commented on SOLR-3939:


bq. Isn't that what capturing the starting versions is all about?

For a node starting up, yeah.  For a leader syncing to someone else - I don't 
think it should matter.

bq. but if you want to peer sync from the leader to a replica that is coming 
back up, if updates are coming in, you are going to force a replication anyway. 

If updates were coming in fast enough during the bounce... I guess so.

 An empty or just replicated index cannot become the leader of a shard after a 
 leader goes down.
 ---

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
Priority: Critical
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time but a large % of the time. 
 To setup a test, startup Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Resolved] (SOLR-3938) prepareCommit command omits commitData

2012-10-25 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-3938.


   Resolution: Fixed
Fix Version/s: (was: 4.0.1)
   4.1

committed to 4x / trunk

 prepareCommit command omits commitData
 --

 Key: SOLR-3938
 URL: https://issues.apache.org/jira/browse/SOLR-3938
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Yonik Seeley
  Labels: 4.0.1_Candidate
 Fix For: 4.1

 Attachments: SOLR-3938.patch


 Solr's prepareCommit doesn't set any commitData, and then when a commit is 
 done, it's too late.




[jira] [Updated] (LUCENE-4506) Fix smoketester to not run checkJavadocsLinks.py across java6-generated javadocs

2012-10-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4506:


Attachment: LUCENE-4506.patch

here's the current patch after a few iterations. its currently testing...

 Fix smoketester to not run checkJavadocsLinks.py across java6-generated 
 javadocs
 

 Key: LUCENE-4506
 URL: https://issues.apache.org/jira/browse/LUCENE-4506
 Project: Lucene - Core
  Issue Type: Task
  Components: general/test
Reporter: Robert Muir
 Attachments: LUCENE-4506.patch


 Currently smokeTester (ant nightly-smoke) fails, because it invokes
 this python script directly and the javadocs checker is more picky.
 However, java6's javadocs generates hopelessly broken html.
 We should fix it to only do this across the java7-generated javadocs and get 
 smokeTesting passing again.




[jira] [Resolved] (LUCENE-4506) Fix smoketester to not run checkJavadocsLinks.py across java6-generated javadocs

2012-10-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4506.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.1

 Fix smoketester to not run checkJavadocsLinks.py across java6-generated 
 javadocs
 

 Key: LUCENE-4506
 URL: https://issues.apache.org/jira/browse/LUCENE-4506
 Project: Lucene - Core
  Issue Type: Task
  Components: general/test
Reporter: Robert Muir
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4506.patch


 Currently smokeTester (ant nightly-smoke) fails, because it invokes
 this python script directly and the javadocs checker is more picky.
 However, java6's javadocs generates hopelessly broken html.
 We should fix it to only do this across the java7-generated javadocs and get 
 smokeTesting passing again.




[jira] [Created] (SOLR-3995) Recovery may never finish on SolrCore shutdown if the last reference to a SolrCore is closed by the recovery process.

2012-10-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3995:
-

 Summary: Recovery may never finish on SolrCore shutdown if the 
last reference to a SolrCore is closed by the recovery process.
 Key: SOLR-3995
 URL: https://issues.apache.org/jira/browse/SOLR-3995
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.1, 5.0







[jira] [Commented] (SOLR-3561) Error during deletion of shard/core

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484711#comment-13484711
 ] 

Mark Miller commented on SOLR-3561:
---

And/Or SOLR-3994

 Error during deletion of shard/core
 ---

 Key: SOLR-3561
 URL: https://issues.apache.org/jira/browse/SOLR-3561
 Project: Solr
  Issue Type: Bug
  Components: multicore, replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
Reporter: Per Steffensen
Assignee: Mark Miller
 Fix For: 4.1, 5.0



[jira] [Commented] (SOLR-3561) Error during deletion of shard/core

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484712#comment-13484712
 ] 

Mark Miller commented on SOLR-3561:
---

{noformat}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:571)
at java.util.ArrayList.get(ArrayList.java:349)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:95)
{noformat}

This is actually likely fine and unrelated - it's something that can happen on 
shutdown and should not be a problem. I've updated it so that a more 
appropriate message is logged.
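The IndexOutOfBoundsException comes from indexing into an election-node list
that is already empty during shutdown (the ZooKeeper nodes are gone). A
hedged sketch of that kind of guard, with hypothetical names (not the actual
LeaderElector code), which logs instead of throwing:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: when the list of election nodes fetched from ZooKeeper is
// empty (e.g. during shutdown), list.get(0) throws
// IndexOutOfBoundsException. Checking first turns it into a log line.
class LeaderCheckSketch {
    // Returns the would-be leader node, or null (with a warning) when
    // the election list is already empty.
    static String leaderOrNull(List<String> electionNodes) {
        if (electionNodes.isEmpty()) {
            System.out.println("WARN: no election nodes found - probably shutting down");
            return null;
        }
        List<String> sorted = new ArrayList<>(electionNodes);
        Collections.sort(sorted);   // lowest sequence node is the leader
        return sorted.get(0);
    }

    public static void main(String[] args) {
        System.out.println(leaderOrNull(List.of()));                        // null, after the warning
        System.out.println(leaderOrNull(List.of("n_0000002", "n_0000001"))); // prints n_0000001
    }
}
```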

 Error during deletion of shard/core
 ---

 Key: SOLR-3561
 URL: https://issues.apache.org/jira/browse/SOLR-3561
 Project: Solr
  Issue Type: Bug
  Components: multicore, replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
Reporter: Per Steffensen
Assignee: Mark Miller
 Fix For: 4.1, 5.0


 Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
 servers).
 Several collections with several slices and one replica for each slice (each 
 slice has two shards)
 Basically we want let our system delete an entire collection. We do this by 
 trying to delete each and every shard under the collection. Each shard is 
 deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
 Solr
 {code}
 CoreAdminRequest request = new CoreAdminRequest();
 request.setAction(CoreAdminAction.UNLOAD);
 request.setCoreName(shardName);
 CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
 {code}
 The delete/unload succeeds, but in like 10% of the cases we get errors on 
 involved Solr servers, right around the time where shard/cores are deleted, 
 and we end up in a situation where ZK still claims (forever) that the deleted 
 shard is still present and active.
 Form here the issue is easilier explained by a more concrete example:
 - 7 Solr servers involved
 - Several collection a.o. one called collection_2012_04, consisting of 28 
 slices, 56 shards (remember 1 replica for each slice) named 
 collection_2012_04_sliceX_shardY for all pairs in {X:1..28}x{Y:1,2}
 - Each Solr server running 8 shards, e.g Solr server #1 is running shard 
 collection_2012_04_slice1_shard1 and Solr server #7 is running shard 
 collection_2012_04_slice1_shard2 belonging to the same slice slice1.
 When we decide to delete the collection collection_2012_04 we go through 
 all 56 shards and delete/unload them one-by-one - including 
 collection_2012_04_slice1_shard1 and collection_2012_04_slice1_shard2. At 
 some point during or shortly after all this deletion we see the following 
 exceptions in solr.log on Solr server #7
 {code}
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
 core not found:collection_2012_04_slice1_shard1
 request: 
 http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
 at 
 org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Recovery failed - trying again...
 Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
 WARNING:
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
 at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
 at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
 at 
 

[jira] [Comment Edited] (SOLR-3561) Error during deletion of shard/core

2012-10-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484712#comment-13484712
 ] 

Mark Miller edited comment on SOLR-3561 at 10/26/12 4:58 AM:
-

{noformat}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:571)
at java.util.ArrayList.get(ArrayList.java:349)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:95)
{noformat}

This is actually likely fine and unrelated - it's something that can happen on 
shutdown and should not be a problem. I've updated it so that a more 
appropriate message is logged.

  was (Author: markrmil...@gmail.com):
{noformat}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:571)
at java.util.ArrayList.get(ArrayList.java:349)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:95)
{noformat}

This is actually likely find and unrelated - it's something that can happen on 
shutdown and should not be a problem. I've updated it so that a more 
appropriate message is logged.
  
 Error during deletion of shard/core
 ---

 Key: SOLR-3561
 URL: https://issues.apache.org/jira/browse/SOLR-3561
 Project: Solr
  Issue Type: Bug
  Components: multicore, replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
Reporter: Per Steffensen
Assignee: Mark Miller
 Fix For: 4.1, 5.0


 Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
 servers).
 Several collections with several slices and one replica for each slice (each 
 slice has two shards)
 Basically we want to let our system delete an entire collection. We do this by 
 trying to delete each and every shard under the collection. Each shard is 
 deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
 Solr
 {code}
 CoreAdminRequest request = new CoreAdminRequest();
 request.setAction(CoreAdminAction.UNLOAD);
 request.setCoreName(shardName);
 CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
 {code}
 The delete/unload succeeds, but in like 10% of the cases we get errors on 
 involved Solr servers, right around the time where shard/cores are deleted, 
 and we end up in a situation where ZK still claims (forever) that the deleted 
 shard is still present and active.
 From here the issue is more easily explained by a concrete example:
 - 7 Solr servers involved
 Several collections, among others one called collection_2012_04, consisting of 28 
 slices, 56 shards (remember 1 replica for each slice) named 
 collection_2012_04_sliceX_shardY for all pairs in {X:1..28}x{Y:1,2}
 Each Solr server running 8 shards, e.g. Solr server #1 is running shard 
 collection_2012_04_slice1_shard1 and Solr server #7 is running shard 
 collection_2012_04_slice1_shard2 belonging to the same slice slice1.
 When we decide to delete the collection collection_2012_04 we go through 
 all 56 shards and delete/unload them one-by-one - including 
 collection_2012_04_slice1_shard1 and collection_2012_04_slice1_shard2. At 
 some point during or shortly after all this deletion we see the following 
 exceptions in solr.log on Solr server #7
 {code}
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
 core not found:collection_2012_04_slice1_shard1
 request: 
 http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
 at 
 org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
 Aug 1, 2012 12:02:50 AM 

[jira] [Resolved] (SOLR-3992) QuerySenderListener doesn't populate document cache

2012-10-25 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-3992.


   Resolution: Fixed
Fix Version/s: 4.1

Committed fix to 4x and trunk. Thanks Shotaro!

 QuerySenderListener doesn't populate document cache
 ---

 Key: SOLR-3992
 URL: https://issues.apache.org/jira/browse/SOLR-3992
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0
Reporter: Shotaro Kamio
 Fix For: 4.1


 The QuerySenderListener class can be used to populate caches on startup of Solr 
 (the firstSearcher event). The code looks like it also tries to populate the 
 document cache, but it doesn't.
 {code}
 NamedList values = rsp.getValues();
 for (int i=0; i<values.size(); i++) {
   Object o = values.getVal(i);
   if (o instanceof DocList) {
 {code}
 This is because the response object stores the document list in a 
 ResultContext object, not directly as a DocList object.
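 The mismatch described above can be sketched with a self-contained example. 
 Note that DocList and ResultContext below are stub classes for illustration 
 only, not the real Solr APIs: the point is that an instanceof DocList check 
 never matches the wrapper until it is unwrapped.

```java
// Self-contained sketch with stub classes (not the real Solr APIs): the
// warmup loop's "instanceof DocList" test never matches, because the
// response actually holds a ResultContext wrapper around the DocList.
import java.util.Arrays;
import java.util.List;

class DocList {}                                       // stand-in for org.apache.solr.search.DocList
class ResultContext { DocList docs = new DocList(); }  // stand-in wrapper type

public class WarmupSketch {
    static int countDocLists(List<?> values, boolean unwrap) {
        int hits = 0;
        for (Object o : values) {
            if (unwrap && o instanceof ResultContext) {
                o = ((ResultContext) o).docs;          // the fix: unwrap the wrapper first
            }
            if (o instanceof DocList) {
                hits++;                                // here the document cache would get warmed
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // what rsp.getValues() effectively holds: a wrapper, not a bare DocList
        List<?> values = Arrays.asList(new ResultContext());
        System.out.println(countDocLists(values, false)); // 0: DocList never seen, cache stays cold
        System.out.println(countDocLists(values, true));  // 1: DocList reached after unwrapping
    }
}
```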

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3985) Allow ExternalFileField caches to be reloaded on newSearcher and firstSearcher events

2012-10-25 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484725#comment-13484725
 ] 

Yonik Seeley commented on SOLR-3985:


Hey Alan, I think this looks fine.
Could we perhaps remove the Copyright (c) 2012 Lemur Consulting Ltd.?

 Allow ExternalFileField caches to be reloaded on newSearcher and 
 firstSearcher events
 -

 Key: SOLR-3985
 URL: https://issues.apache.org/jira/browse/SOLR-3985
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-3985.patch


 At the moment, ExternalFileField caches can only be refreshed/reloaded by 
 clearing them entirely, which forces a reload the next time they are used in 
 a query.  If your external files are big, this can take unacceptably long.
 Instead, we should allow the caches to be loaded on newSearcher/firstSearcher 
 events, running in the background.
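 For reference, such a reload could be wired up as an event listener in 
 solrconfig.xml along these lines. This is a sketch only: the listener class 
 name is an assumption based on this patch, not a confirmed API.

```xml
<!-- Hypothetical sketch: reload ExternalFileField caches in the background
     whenever a new searcher is warmed, instead of clearing them and paying
     the reload cost on the next query. Class name assumed from this patch. -->
<listener event="firstSearcher"
          class="org.apache.solr.schema.ExternalFileFieldReloader"/>
<listener event="newSearcher"
          class="org.apache.solr.schema.ExternalFileFieldReloader"/>
```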

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b58) - Build # 1991 - Failure!

2012-10-25 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1991/
Java: 64bit/jdk1.8.0-ea-b58 -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 9082 lines...]
[junit4:junit4] ERROR: JVM J1 ended with an exception, command line: 
/mnt/ssd/jenkins/tools/java/64bit/jdk1.8.0-ea-b58/jre/bin/java 
-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=CF61FCF0C073B92E -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false 
-Dtests.lockdir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build 
-Dtests.codec=random -Dtests.postingsformat=random -Dtests.locale=random 
-Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.1 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/testlogging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Dtests.sandbox.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.1-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -classpath