Re: [VOTE] Release PyLucene 4.7.1-1

2014-04-10 Thread Thomas Koch
Am 09.04.2014 um 22:35 schrieb Andi Vajda va...@apache.org:

 I think all of these were covered last week on this list, or was it off-list?
 
Hi,

yes - sorry, just missed that discussion - it’s here: 
http://mail-archives.apache.org/mod_mbox/lucene-pylucene-dev/201403.mbox/date

 Anyway:
  - you must use the same compiler used to build python to build extensions 
 for it - this may imply building python from sources
  - you must ensure that the desired version of the java libraries and header 
 files are picked up: setting JAVA_HOME correctly and/or the relevant 
 variables in JCC's setup.py
  - you should use the compiler and linker Apple Xcode command line tools (a 
 separate install) 
 
 Andi..

Indeed, my python27 (stock Mac OS X bundle) was built with GCC:
 Python 2.7.5 (default, Aug 25 2013, 00:04:04) 
 [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin

and I was using clang for the build process. 

I was then able to build JCC 2.19 and PyLucene 4.7.1-1 with gcc and these 
settings:

export ARCHFLAGS=-arch x86_64
export CC=/usr/local/bin/gcc-4.2

Note that I installed gcc via Homebrew (I didn’t want to try the alternative of 
building Python with clang ...)
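The check and the settings above can be sketched as follows (the gcc path is an assumption for a Homebrew install; adjust it to your system):

```shell
# Inspect which compiler Python itself was built with; the interpreter's
# startup banner shows the same information.
python3 -c "import platform; print(platform.python_compiler())" || true

# If it reports GCC, build JCC and PyLucene with a matching gcc rather
# than clang. Paths below are assumptions for a Homebrew gcc install.
export ARCHFLAGS="-arch x86_64"
export CC=/usr/local/bin/gcc-4.2
```

The key point is only that CC matches the compiler named in Python's banner; JAVA_HOME must likewise point at the JDK whose headers JCC's setup.py should pick up.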

Finally, 'make test' passes!

+1 

regards,
Thomas



Re: problem in using distanceFilter in booleanFilter (using FilterClause)

2014-04-10 Thread kumaran
Hi All,

I am trying to add a TermsFilter and a DistanceFilter to a BooleanFilter using
FilterClause, but I am getting the error below. Please check my code and guide
me.




*Code:*

 DistanceQueryBuilder queryBuilder = new DistanceQueryBuilder(latLong[0],
     latLong[1], radius, "lat", "lon",
     CartesianTierPlotter.DEFALT_FIELD_PREFIX, true);
 DistanceFieldComparatorSource distComp =
     new DistanceFieldComparatorSource(queryBuilder.getDistanceFilter());
 Sort distSort = new Sort(new SortField("", distComp, true));
 QueryParser parser = new QueryParser(Version.LUCENE_30, "city",
     new StandardAnalyzer(Version.LUCENE_30));
 Query query = parser.parse(strQuery);
 System.out.println("distance sort details ::: " + distSort);
 BooleanFilter boolFilter = new BooleanFilter();
 FilterClause filterClause2 =
     new FilterClause(queryBuilder.getFilter(), BooleanClause.Occur.MUST);
 boolFilter.add(filterClause2);

 Term term = new Term("city", "chengalpat");
 TermsFilter filter = new TermsFilter();
 filter.addTerm(term);
 FilterClause filterClause = new FilterClause(filter,
     BooleanClause.Occur.SHOULD);
 boolFilter.add(filterClause);

 TopDocs topDocs = searcher.search(query, boolFilter, 20, distSort);



*Error Trace:*

 java.lang.NullPointerException
     at org.apache.lucene.spatial.tier.DistanceFieldComparatorSource$DistanceScoreDocLookupComparator.copy(DistanceFieldComparatorSource.java:105)
     at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.collect(TopFieldCollector.java:89)
     at org.apache.lucene.search.IndexSearcher.searchWithFilter(IndexSearcher.java:258)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:218)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:199)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:177)
     at org.apache.lucene.search.Searcher.search(Searcher.java:49)
     at com.zoho.training.RadialSearch.search(RadialSearch.java:246)
     at com.zoho.training.RadialSearch.main(RadialSearch.java:281)
 Exception in thread "main" java.lang.NullPointerException
     at org.apache.lucene.spatial.tier.DistanceFieldComparatorSource$DistanceScoreDocLookupComparator.copy(DistanceFieldComparatorSource.java:105)
     at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.collect(TopFieldCollector.java:89)
     at org.apache.lucene.search.IndexSearcher.searchWithFilter(IndexSearcher.java:258)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:218)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:199)
     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:177)
     at org.apache.lucene.search.Searcher.search(Searcher.java:49)
     at com.zoho.training.RadialSearch.search(RadialSearch.java:246)
     at com.zoho.training.RadialSearch.main(RadialSearch.java:281)





Kumaran R


Re: [VOTE] Lucene/Solr 4.7.2

2014-04-10 Thread Robert Muir
On Wed, Apr 9, 2014 at 1:01 PM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 one issue with the branch:
 Steve Rowe updated the version numbers in common-build.xml to 4.7.1-dev 
 (which is not needed and should not be done, the release branch should have 
 4.7-dev for its lifetime). The official version is set on building the 
 release artifacts.
 Because of this update, to be consistent, we should change version numbers to 
 4.7.2-dev (or better: revert to 4.7-dev).

 This is not an issue for respin, but if we do, the versions should be 
 consistent.


I will take care of this.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1138: POMs out of sync

2014-04-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1138/

No tests ran.

Build Log:
[...truncated 27929 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:164: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:493:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2002:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1453:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:530:
 Error deploying artifact 'org.apache.lucene:lucene-highlighter:jar': Error 
retrieving previous build number for artifact 
'org.apache.lucene:lucene-highlighter:jar': repository metadata for: 'snapshot 
org.apache.lucene:lucene-highlighter:5.0-SNAPSHOT' could not be retrieved from 
repository: apache.snapshots.https due to an error: Error transferring file: 
Server returned HTTP response code: 502 for URL: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-highlighter/5.0-SNAPSHOT/maven-metadata.xml

Total time: 18 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5975) NullPointerException in StatsComponent when field is of type Date

2014-04-10 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965079#comment-13965079
 ] 

Elran Dvir commented on SOLR-5975:
--

I am using 4.4.
I am sorry for the duplication. I wasn't aware it's a known issue.
Thanks.

 NullPointerException in StatsComponent when field is of type Date 
 --

 Key: SOLR-5975
 URL: https://issues.apache.org/jira/browse/SOLR-5975
 Project: Solr
  Issue Type: Bug
Reporter: Elran Dvir
 Attachments: SOLR-5975.patch


 For a distributed stats query on a date field, a NullPointerException is 
 thrown if there aren't any docs matching the query in one of the shards.
 In this case the values "sum" and "sumOfSquares" won't be present in the shard 
 response, and we will get a NullPointerException when we try to read them in 
 updateTypeSpecificStats in accumulate.
 A patch fixing it is attached.
 The full exception stack trace:
 java.lang.NullPointerException at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:484)
  at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:128)
  at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:121)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904) at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474)
  at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) 
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:370) at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644) at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:804)
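The guard the attached patch presumably adds can be sketched in plain Java. All names here (StatsMergeSketch, accumulate) are hypothetical stand-ins, not Solr's actual StatsValuesFactory code; the point is that unboxing a missing (null) Double from a shard response is exactly what throws the NPE, so missing values must be treated as no-ops:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the failure mode: a shard that matched no documents omits
// "sum" and "sumOfSquares" from its stats response, so the merging side
// must skip them instead of unboxing null.
public class StatsMergeSketch {
    static double sum = 0.0, sumOfSquares = 0.0;

    // Hypothetical stand-in for updateTypeSpecificStats(): without the
    // null check, "sum += s" auto-unboxes a null Double and throws NPE.
    static void accumulate(Map<String, Double> shardResponse) {
        Double s = shardResponse.get("sum");
        Double ss = shardResponse.get("sumOfSquares");
        if (s == null || ss == null) {
            return; // shard had no matching docs; nothing to add
        }
        sum += s;
        sumOfSquares += ss;
    }

    public static void main(String[] args) {
        Map<String, Double> full = new HashMap<>();
        full.put("sum", 10.0);
        full.put("sumOfSquares", 34.0);
        accumulate(full);
        accumulate(new HashMap<>()); // empty shard: previously the NPE case
        System.out.println(sum + " " + sumOfSquares); // prints 10.0 34.0
    }
}
```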



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5583) Add DataInput.skipBytes

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965080#comment-13965080
 ] 

ASF subversion and git services commented on LUCENE-5583:
-

Commit 1586231 from jpou...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586231 ]

LUCENE-5583: Add DataInput.skipBytes, ChecksumIndexInput can now seek forward.

 Add DataInput.skipBytes
 ---

 Key: LUCENE-5583
 URL: https://issues.apache.org/jira/browse/LUCENE-5583
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Blocker
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5583.patch, LUCENE-5583.patch


 I was playing with on-the-fly checksum verification and this made me stumble 
 upon an issue with {{BufferedChecksumIndexInput}}.
 I have some code that skips over a {{DataInput}} by reading bytes into 
 /dev/null, eg.
 {code}
   private static final byte[] SKIP_BUFFER = new byte[1024];
   private static void skipBytes(DataInput in, long numBytes) throws 
 IOException {
 assert numBytes >= 0;
 for (long skipped = 0; skipped < numBytes; ) {
   final int toRead = (int) Math.min(numBytes - skipped, 
 SKIP_BUFFER.length);
   in.readBytes(SKIP_BUFFER, 0, toRead);
   skipped += toRead;
 }
   }
 {code}
 It is fine to read into this static buffer, even from multiple threads, since 
 the content that is read doesn't matter here. However, it breaks with 
 {{BufferedChecksumIndexInput}} because of the way that it updates the 
 checksum:
 {code}
   @Override
   public void readBytes(byte[] b, int offset, int len)
 throws IOException {
 main.readBytes(b, offset, len);
 digest.update(b, offset, len);
   }
 {code}
 If you are unlucky enough so that a concurrent call to {{skipBytes}} started 
 modifying the content of {{b}} before the call to {{digest.update(b, offset, 
 len)}} finished, then your checksum will be wrong.
 I think we should make {{BufferedChecksumIndexInput}} read into a private 
 buffer first instead of relying on the user-provided buffer.
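The proposed fix can be illustrated with a self-contained sketch. This is not Lucene's actual patch; CRC32 stands in for the checksum and a byte array stands in for the underlying IndexInput. It shows why feeding the digest from a private copy makes the checksum immune to later mutation of the caller's buffer:

```java
import java.util.zip.CRC32;

// Illustrative sketch: a checksumming reader that copies each read into
// a private scratch buffer before updating the digest, so a caller (or
// another thread) reusing its own buffer cannot corrupt the checksum.
public class PrivateBufferChecksum {
    private final byte[] data;                      // stand-in for the wrapped input
    private int pos = 0;
    private final CRC32 digest = new CRC32();
    private final byte[] scratch = new byte[1024];  // the private buffer

    PrivateBufferChecksum(byte[] data) { this.data = data; }

    // Copy len bytes into the caller's buffer b, but feed the digest from
    // the private copy, never from b itself.
    void readBytes(byte[] b, int offset, int len) {
        System.arraycopy(data, pos, b, offset, len);
        System.arraycopy(data, pos, scratch, 0, len);
        digest.update(scratch, 0, len);
        pos += len;
    }

    long checksum() { return digest.getValue(); }

    public static void main(String[] args) {
        byte[] content = "hello checksum".getBytes();
        PrivateBufferChecksum in = new PrivateBufferChecksum(content);
        byte[] skipBuffer = new byte[4];
        for (int read = 0; read < content.length; ) {
            int toRead = Math.min(4, content.length - read);
            in.readBytes(skipBuffer, 0, toRead);
            skipBuffer[0] = 0; // simulate a concurrent writer trashing the buffer
            read += toRead;
        }
        CRC32 expected = new CRC32();
        expected.update(content, 0, content.length);
        // With the private buffer, the checksum still matches:
        System.out.println(in.checksum() == expected.getValue()); // prints true
    }
}
```

Updating the digest directly from `b` (as the quoted readBytes does) would make the printed comparison fail whenever `skipBuffer` is mutated before the update completes.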



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5975) NullPointerException in StatsComponent when field is of type Date

2014-04-10 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir resolved SOLR-5975.
--

Resolution: Duplicate

 NullPointerException in StatsComponent when field is of type Date 
 --

 Key: SOLR-5975
 URL: https://issues.apache.org/jira/browse/SOLR-5975
 Project: Solr
  Issue Type: Bug
Reporter: Elran Dvir
 Attachments: SOLR-5975.patch


 For a distributed stats query on a date field, a NullPointerException is 
 thrown if there aren't any docs matching the query in one of the shards.
 In this case the values "sum" and "sumOfSquares" won't be present in the shard 
 response, and we will get a NullPointerException when we try to read them in 
 updateTypeSpecificStats in accumulate.
 A patch fixing it is attached.
 The full exception stack trace:
 java.lang.NullPointerException at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:484)
  at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:128)
  at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:121)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904) at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474)
  at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) 
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:370) at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644) at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:804)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1138: POMs out of sync

2014-04-10 Thread Adrien Grand
On Thu, Apr 10, 2014 at 8:48 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:530:
  Error deploying artifact 'org.apache.lucene:lucene-highlighter:jar': Error 
 retrieving previous build number for artifact 
 'org.apache.lucene:lucene-highlighter:jar': repository metadata for: 
 'snapshot org.apache.lucene:lucene-highlighter:5.0-SNAPSHOT' could not be 
 retrieved from repository: apache.snapshots.https due to an error: Error 
 transferring file: Server returned HTTP response code: 502 for URL: 
 https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-highlighter/5.0-SNAPSHOT/maven-metadata.xml

Looks like it was a temporary error; the link seems to work now.

-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #625: POMs out of sync

2014-04-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/625/

No tests ran.

Build Log:
[...truncated 28436 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:164: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/solr/build.xml:575:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:1454:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:531:
 Error deploying artifact 'org.apache.solr:solr-test-framework:jar': Error 
retrieving previous build number for artifact 
'org.apache.solr:solr-test-framework:jar': repository metadata for: 'snapshot 
org.apache.solr:solr-test-framework:4.8-SNAPSHOT' could not be retrieved from 
repository: apache.snapshots.https due to an error: Error transferring file: 
Server returned HTTP response code: 503 for URL: 
https://repository.apache.org/content/repositories/snapshots/org/apache/solr/solr-test-framework/4.8-SNAPSHOT/maven-metadata.xml.sha1

Total time: 21 minutes 42 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Extending pagination using cursorMark

2014-04-10 Thread Vanlerberghe, Luc
In Solr 4.7 an exciting new feature was added that allows one to page through a 
complete result set without having to worry about missing or double results at 
page boundaries while keeping resource utilization low.

I have a common use case that has similar performance and consistency problems 
that could be solved by extending the way CursorMarks work:

A. The user executes a search and obtains thousands of results of which he sees 
the first 'page'.
   Apart from scrolling through the list he also has a scrollbar (or paging 
controls) to jump to anywhere in the list.
B. The user uses the scrollbar to jump to an arbitrary place in the list.
C. The user scrolls down a bit (but past the current 'page') to find what he's 
looking for.
D. The user realizes he's too far down and scrolls up a bit again (but before 
the current 'page' again...)

(Yes, I know that users should be educated to refine their search, but 
unfortunately, if the client for which the application is developed specifies 
that it should be possible to use it this way...)

For the moment this is implemented by using the start/rows parameters to get 
the appropriate 'page' and this has the disadvantages that cursorMark solves:
- Solr (actually I use Lucene directly, but that doesn't matter here) needs to 
store *all* documents up to document (start+rows) to be able to return just 
the rows requested. Except for step A (where start==0), this may be a huge 
performance hit.
- If the index is modified concurrently (especially when using NRT), jumping to 
the next/previous page can cause documents being repeated or skipped at page 
boundaries (as explained in 
https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results)

Here's the way an extension to the cursorMark system could solve the problem:
A. Solr/Lucene executes the search and returns the total number of hits and the 
requested number of top documents.
   start=0, rows=n, cursorMark=*
B. start=x, rows=n, cursorMark=*: Here Solr should allow combining both 
start!=0 and cursorMark=*. It should execute a normal request using start=x and 
rows=n and add two cursorMarks: one corresponding to the sort values of the 
first document and one corresponding to the sort values of the last document.
C. Use cursorMark to get the 'next' pages: This is the same way cursorMark 
works for the moment:  the user passes the cursorMark corresponding to the sort 
values of the last document.
D. Use the cursorMark corresponding to the sort values of the first document to 
get the 'previous' pages.
In terms of implementing these changes, I've been looking at the source code 
and already did the easy ones :)
- If a cursorMark is passed (either cursorMark=* or a 'real' value), Solr 
should return two cursorMarks in the result: nextCursorMark as before and 
prevCursorMark corresponding to the sort values of the first document. Done.
- start!=0 and cursorMark=* should no longer be mutually exclusive (but 
start!=0 and cursorMark!=* should). Done.
- When returning a result using a cursorMark, the start value returned should 
correspond to the actual position of the first document in the full result set. 
 For the next page, this equals the number of documents skipped during 
processing, but unfortunately I didn't see a way (yet) to pass that information 
along everywhere.  This start value, together with the (possibly changed) 
numFound value can be used in the GUI to adjust the position of the scrollbar 
or the paging controls accordingly without having to estimate it.
- Implementing reverse paging could actually be easier than it sounds by 
internally reversing the sort order (really reversing, not just reversing 
ASC/DESC!) using the cursor as in the normal case and afterwards reversing the 
obtained list of documents.  I've updated PagingFieldCollector in 
TopFieldCollector.java by negating the values in reverseMul and overriding 
topDocs(start, howMany), but have to check everywhere partial results are 
merged as well...
- Implement as many test cases for the paging-up case as exist for the 
paging-down case (help! :)
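The next-page/previous-page mechanics described above can be sketched over a plain sorted list (a toy model to show the reverse-then-reverse trick, not Solr's implementation; the integer sort values stand in for the per-document sort keys a cursorMark encodes):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy model of bidirectional cursor paging: the "cursor" is just the
// sort value of the first (prevCursorMark) or last (nextCursorMark)
// document of the current page.
public class CursorPaging {
    // Next page: the first n values strictly after the cursor, in sort order.
    static List<Integer> nextPage(List<Integer> sorted, int cursor, int n) {
        List<Integer> page = new ArrayList<>();
        for (int v : sorted) {
            if (v > cursor && page.size() < n) page.add(v);
        }
        return page;
    }

    // Previous page: walk the fully reversed order, collect values
    // strictly before the cursor, then reverse the collected page back
    // into normal sort order.
    static List<Integer> prevPage(List<Integer> sorted, int cursor, int n) {
        List<Integer> page = new ArrayList<>();
        for (int i = sorted.size() - 1; i >= 0; i--) {
            int v = sorted.get(i);
            if (v < cursor && page.size() < n) page.add(v);
        }
        Collections.reverse(page);
        return page;
    }

    public static void main(String[] args) {
        List<Integer> docs = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9);
        System.out.println(nextPage(docs, 3, 3)); // prints [4, 5, 6]
        System.out.println(prevPage(docs, 4, 3)); // prints [1, 2, 3]
    }
}
```

Because only values past the cursor are collected, neither direction needs to buffer the skipped prefix, which is the resource win over start/rows.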

While working on the code, I thought of another use case as well: refreshing 
the current page:
Instead of passing the same start value again, the prevCursorMark could be 
passed, but with a hint that the document on or after this cursorMark should be 
returned.

Which brings me to the question of how to specify the new behavior to Solr 
without affecting the current behavior.

I propose that prevCursorMark and nextCursorMark simply encode the sort values 
for the first and last document (as nextCursorMark does now) and that a simple 
prefix is used when cursorMark should be used differently:
>: documents after the cursor position: use with nextCursorMark to get the 
next page of results
>=: documents after or on the cursor position: use with prevCursorMark to 
refresh the same page keeping the same sort position for the first document
<: documents before 

[jira] [Commented] (LUCENE-5583) Add DataInput.skipBytes

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965083#comment-13965083
 ] 

Uwe Schindler commented on LUCENE-5583:
---

Thanks! After hopefully getting some comments on LUCENE-3237, I will branch 4.8!

 Add DataInput.skipBytes
 ---

 Key: LUCENE-5583
 URL: https://issues.apache.org/jira/browse/LUCENE-5583
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Blocker
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5583.patch, LUCENE-5583.patch


 I was playing with on-the-fly checksum verification and this made me stumble 
 upon an issue with {{BufferedChecksumIndexInput}}.
 I have some code that skips over a {{DataInput}} by reading bytes into 
 /dev/null, eg.
 {code}
   private static final byte[] SKIP_BUFFER = new byte[1024];
   private static void skipBytes(DataInput in, long numBytes) throws 
 IOException {
 assert numBytes >= 0;
 for (long skipped = 0; skipped < numBytes; ) {
   final int toRead = (int) Math.min(numBytes - skipped, 
 SKIP_BUFFER.length);
   in.readBytes(SKIP_BUFFER, 0, toRead);
   skipped += toRead;
 }
   }
 {code}
 It is fine to read into this static buffer, even from multiple threads, since 
 the content that is read doesn't matter here. However, it breaks with 
 {{BufferedChecksumIndexInput}} because of the way that it updates the 
 checksum:
 {code}
   @Override
   public void readBytes(byte[] b, int offset, int len)
 throws IOException {
 main.readBytes(b, offset, len);
 digest.update(b, offset, len);
   }
 {code}
 If you are unlucky enough so that a concurrent call to {{skipBytes}} started 
 modifying the content of {{b}} before the call to {{digest.update(b, offset, 
 len)}} finished, then your checksum will be wrong.
 I think we should make {{BufferedChecksumIndexInput}} read into a private 
 buffer first instead of relying on the user-provided buffer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5583) Add DataInput.skipBytes

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965083#comment-13965083
 ] 

Uwe Schindler edited comment on LUCENE-5583 at 4/10/14 7:18 AM:


Thanks! After hopefully getting some comments on LUCENE-5588, I will branch 4.8!


was (Author: thetaphi):
Thanks! After hopefully getting some comments on LUCENE-3237, I will branch 4.8!

 Add DataInput.skipBytes
 ---

 Key: LUCENE-5583
 URL: https://issues.apache.org/jira/browse/LUCENE-5583
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Blocker
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5583.patch, LUCENE-5583.patch


 I was playing with on-the-fly checksum verification and this made me stumble 
 upon an issue with {{BufferedChecksumIndexInput}}.
 I have some code that skips over a {{DataInput}} by reading bytes into 
 /dev/null, eg.
 {code}
   private static final byte[] SKIP_BUFFER = new byte[1024];
   private static void skipBytes(DataInput in, long numBytes) throws 
 IOException {
 assert numBytes >= 0;
 for (long skipped = 0; skipped < numBytes; ) {
   final int toRead = (int) Math.min(numBytes - skipped, 
 SKIP_BUFFER.length);
   in.readBytes(SKIP_BUFFER, 0, toRead);
   skipped += toRead;
 }
   }
 {code}
 It is fine to read into this static buffer, even from multiple threads, since 
 the content that is read doesn't matter here. However, it breaks with 
 {{BufferedChecksumIndexInput}} because of the way that it updates the 
 checksum:
 {code}
   @Override
   public void readBytes(byte[] b, int offset, int len)
 throws IOException {
 main.readBytes(b, offset, len);
 digest.update(b, offset, len);
   }
 {code}
 If you are unlucky enough so that a concurrent call to {{skipBytes}} started 
 modifying the content of {{b}} before the call to {{digest.update(b, offset, 
 len)}} finished, then your checksum will be wrong.
 I think we should make {{BufferedChecksumIndexInput}} read into a private 
 buffer first instead of relying on the user-provided buffer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5583) Add DataInput.skipBytes

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965086#comment-13965086
 ] 

ASF subversion and git services commented on LUCENE-5583:
-

Commit 1586232 from jpou...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586232 ]

LUCENE-5583: Add DataInput.skipBytes, ChecksumIndexInput can now seek forward.

 Add DataInput.skipBytes
 ---

 Key: LUCENE-5583
 URL: https://issues.apache.org/jira/browse/LUCENE-5583
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Blocker
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5583.patch, LUCENE-5583.patch


 I was playing with on-the-fly checksum verification and this made me stumble 
 upon an issue with {{BufferedChecksumIndexInput}}.
 I have some code that skips over a {{DataInput}} by reading bytes into 
 /dev/null, eg.
 {code}
   private static final byte[] SKIP_BUFFER = new byte[1024];
   private static void skipBytes(DataInput in, long numBytes) throws 
 IOException {
 assert numBytes = 0;
 for (long skipped = 0; skipped  numBytes; ) {
   final int toRead = (int) Math.min(numBytes - skipped, 
 SKIP_BUFFER.length);
   in.readBytes(SKIP_BUFFER, 0, toRead);
   skipped += toRead;
 }
   }
 {code}
 It is fine to read into this static buffer, even from multiple threads, since 
 the content that is read doesn't matter here. However, it breaks with 
 {{BufferedChecksumIndexInput}} because of the way that it updates the 
 checksum:
 {code}
   @Override
   public void readBytes(byte[] b, int offset, int len)
       throws IOException {
     main.readBytes(b, offset, len);
     digest.update(b, offset, len);
   }
 {code}
 If you are unlucky enough so that a concurrent call to {{skipBytes}} started 
 modifying the content of {{b}} before the call to {{digest.update(b, offset, 
 len)}} finished, then your checksum will be wrong.
 I think we should make {{BufferedChecksumIndexInput}} read into a private 
 buffer first instead of relying on the user-provided buffer.
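The fix described in the last sentence can be sketched outside Lucene with a plain {{java.util.zip.CRC32}}; the class and method names below are illustrative, not the committed Lucene code. Copying the caller's bytes into a private scratch buffer before updating the digest makes the checksum immune to concurrent modification of the caller's buffer:

```java
import java.util.zip.CRC32;

/** Sketch (assumption, not Lucene's actual implementation): update the
 *  checksum only from a private copy of the caller's bytes, so a buffer
 *  that is concurrently overwritten cannot corrupt the digest. */
public class PrivateBufferChecksum {
    private final CRC32 digest = new CRC32();
    private final byte[] scratch = new byte[1024]; // private, never exposed

    public void update(byte[] b, int offset, int len) {
        while (len > 0) {
            int chunk = Math.min(len, scratch.length);
            System.arraycopy(b, offset, scratch, 0, chunk); // snapshot first
            digest.update(scratch, 0, chunk);               // then digest the snapshot
            offset += chunk;
            len -= chunk;
        }
    }

    public long getValue() {
        return digest.getValue();
    }

    public static void main(String[] args) {
        PrivateBufferChecksum c = new PrivateBufferChecksum();
        byte[] data = "hello".getBytes();
        c.update(data, 0, data.length);
        CRC32 reference = new CRC32();
        reference.update(data, 0, data.length);
        if (c.getValue() != reference.getValue()) {
            throw new AssertionError("chunked digest must match whole-buffer digest");
        }
    }
}
```

Since CRC32 is incremental, digesting the snapshot in chunks yields the same value as digesting the whole buffer at once.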






[jira] [Resolved] (LUCENE-5583) Add DataInput.skipBytes

2014-04-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5583.
--

Resolution: Fixed
  Assignee: Adrien Grand

Committed to both branches, thanks Simon, Uwe and Mike!

 Add DataInput.skipBytes
 ---

 Key: LUCENE-5583
 URL: https://issues.apache.org/jira/browse/LUCENE-5583
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Blocker
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5583.patch, LUCENE-5583.patch


 I was playing with on-the-fly checksum verification and this made me stumble 
 upon an issue with {{BufferedChecksumIndexInput}}.
 I have some code that skips over a {{DataInput}} by reading bytes into 
 /dev/null, eg.
 {code}
   private static final byte[] SKIP_BUFFER = new byte[1024];

   private static void skipBytes(DataInput in, long numBytes) throws IOException {
     assert numBytes >= 0;
     for (long skipped = 0; skipped < numBytes; ) {
       final int toRead = (int) Math.min(numBytes - skipped, SKIP_BUFFER.length);
       in.readBytes(SKIP_BUFFER, 0, toRead);
       skipped += toRead;
     }
   }
 {code}
 It is fine to read into this static buffer, even from multiple threads, since 
 the content that is read doesn't matter here. However, it breaks with 
 {{BufferedChecksumIndexInput}} because of the way that it updates the 
 checksum:
 {code}
   @Override
   public void readBytes(byte[] b, int offset, int len)
       throws IOException {
     main.readBytes(b, offset, len);
     digest.update(b, offset, len);
   }
 {code}
 If you are unlucky enough so that a concurrent call to {{skipBytes}} started 
 modifying the content of {{b}} before the call to {{digest.update(b, offset, 
 len)}} finished, then your checksum will be wrong.
 I think we should make {{BufferedChecksumIndexInput}} read into a private 
 buffer first instead of relying on the user-provided buffer.






[jira] [Commented] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965101#comment-13965101
 ] 

Uwe Schindler commented on LUCENE-5586:
---

OK, that's fine!
I don't think this causes bugs at the moment, but it prevents misuse.

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.
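The proposed behavior can be demonstrated with a toy wrapper; {{WrappingInput}} is a hypothetical stand-in for a wrapping IndexInput, not Lucene code. Because a wrapping stream shares mutable state with what it wraps, a shallow clone would be trappy, so clone() fails fast instead:

```java
/** Sketch (hypothetical class, not Lucene's actual code): a wrapper whose
 *  clone() throws rather than returning a shallow copy that still shares
 *  state with the original. */
public class WrappingInput implements Cloneable {
    // Stands in for the wrapped IndexInput + Checksum pair.
    private final StringBuilder shared = new StringBuilder();

    public void read(char c) {
        shared.append(c); // reading also mutates the shared state
    }

    @Override
    public WrappingInput clone() {
        // A shallow copy would still point at 'shared', so reads on the
        // clone would corrupt the original's checksum state. Refuse.
        throw new UnsupportedOperationException("wrapping inputs cannot be cloned");
    }

    public static void main(String[] args) {
        WrappingInput in = new WrappingInput();
        try {
            in.clone();
            throw new AssertionError("expected UnsupportedOperationException");
        } catch (UnsupportedOperationException expected) {
            // clone refused, as intended
        }
    }
}
```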






[jira] [Commented] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965103#comment-13965103
 ] 

Uwe Schindler commented on LUCENE-5586:
---

In general, IndexInputs are cloneable, but this should apply only to the ones 
retrieved from the directory. Wrapping IndexInputs should either:
- clone the delegate
- throw UOE

We should fix that in a separate issue maybe?

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Commented] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965105#comment-13965105
 ] 

Adrien Grand commented on LUCENE-5586:
--

The current patch makes {{ChecksumIndexInput.clone}} throw an UOE although 
there might be native (unwrapped) implementations of it, so maybe the 
exception should rather be on {{BufferedChecksumIndexInput}}?

bq. We should fix that in a separate issue maybe?

I quickly looked at the IndexInput impls that we have and the other ones seem 
to be fine (tests would likely catch it otherwise).

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Updated] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5586:
-

Attachment: LUCENE-5586.patch

Fixed the patch to move the UOE from {{ChecksumIndexInput}} to 
{{BufferedChecksumIndexInput}}.

Otherwise if you write a non-wrapping impl of ChecksumIndexInput, you cannot 
clone using {{super.clone()}}.

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch, LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Comment Edited] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965111#comment-13965111
 ] 

Adrien Grand edited comment on LUCENE-5586 at 4/10/14 8:05 AM:
---

Fixed the patch to move the UOE from {{ChecksumIndexInput}} to 
{{BufferedChecksumIndexInput}}.

Otherwise if you write a non-wrapping impl of ChecksumIndexInput, you cannot 
clone using {{super.clone()}}.


was (Author: jpountz):
Fixed the patch to move the UOE from {{ChecksumIndexInput}} to 
{{BufferedChecksumIndexInput}}.

Otherwise if you write a non-wrapping impl of ChecksumIndexInput, you cannot 
clone using {{super.clone()}.

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch, LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Commented] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965115#comment-13965115
 ] 

ASF subversion and git services commented on LUCENE-5586:
-

Commit 1586239 from jpou...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586239 ]

LUCENE-5586: BufferedChecksumIndexInput.clone now throws an 
UnsupportedOperationException.

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch, LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Commented] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965118#comment-13965118
 ] 

ASF subversion and git services commented on LUCENE-5586:
-

Commit 1586240 from jpou...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586240 ]

LUCENE-5586: BufferedChecksumIndexInput.clone now throws an 
UnsupportedOperationException.

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5586.patch, LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Resolved] (LUCENE-5586) BufferedChecksumIndexInput is not cloneable

2014-04-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5586.
--

   Resolution: Fixed
Fix Version/s: 4.8
 Assignee: Adrien Grand

 BufferedChecksumIndexInput is not cloneable
 ---

 Key: LUCENE-5586
 URL: https://issues.apache.org/jira/browse/LUCENE-5586
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.8

 Attachments: LUCENE-5586.patch, LUCENE-5586.patch


 {{BufferedChecksumIndexInput}} implements {{Cloneable}}, yet its clone method 
 would return a shallow copy that still wraps the same {{IndexInput}} and 
 {{Checksum}}. This is trappy, because reading on the clone would also read on 
 the original instance and update the checksum.
 Since {{Checksum}} are not cloneable, I think {{ChecksumIndexInput.clone}} 
 should just throw an UOE.






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965159#comment-13965159
 ] 

Uwe Schindler commented on LUCENE-3237:
---

Hi another issue:

In FSIndexOutput:
{code:java}
file.getFD().sync();
{code}

This does not do the for-loop we currently do to repeat the fsync 5 times if it 
fails. We should maybe add this here, too. Also, I would not remove 
Directory.sync(), we should maybe leave this for LUCENE-5588 to sync the 
directory itself. But the method signature would change in any case.
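The retry loop mentioned above can be sketched as a small helper; {{FsyncUtil}} and {{fsyncWithRetry}} are hypothetical names, not Lucene's actual IOUtils API, and the retry count and sleep are illustrative choices:

```java
import java.io.FileDescriptor;
import java.io.IOException;
import java.io.SyncFailedException;

/** Sketch (assumption, not Lucene's committed code): retry fsync a few
 *  times before giving up, mirroring the for-loop discussed above. */
public final class FsyncUtil {
    public static void fsyncWithRetry(FileDescriptor fd, int retries) throws IOException {
        for (int i = 0; ; i++) {
            try {
                fd.sync(); // flush this descriptor's file to stable storage
                return;
            } catch (SyncFailedException e) {
                if (i >= retries) {
                    throw e; // give up after the configured number of attempts
                }
                try {
                    Thread.sleep(5); // brief pause before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted while retrying fsync", ie);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        try (java.io.FileOutputStream out = new java.io.FileOutputStream(
                java.io.File.createTempFile("fsync", ".tmp"))) {
            out.write(42);
            fsyncWithRetry(out.getFD(), 5);
        }
    }
}
```

Unlike the current FSDirectory code, this syncs the descriptor that actually wrote the data, which is the point of the patch under discussion.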

 FSDirectory.fsync() may not work properly
 -

 Key: LUCENE-3237
 URL: https://issues.apache.org/jira/browse/LUCENE-3237
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Shai Erera
 Attachments: LUCENE-3237.patch


 Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
 FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
 written to the file by other FileDescriptors. It would be better if we do 
 this operation on the actual RAF/FileOS which wrote the data. We can add 
 sync() to IndexOutput and FSIndexOutput will do that.
 Directory-wise, we should stop syncing on file names, and instead sync on the 
 IOs that performed the write operations.






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965162#comment-13965162
 ] 

Michael McCandless commented on LUCENE-3237:


bq. In fact, fsync syncs the whole file, because it relies on fsync() POSIX API 
or FlushFileBuffers() in Windows. Both really sync the file the descriptor is 
pointing to. Those functions don't sync the descriptor's buffers only.

This is my impression as well, and as Yonik said, it's hard to imagine any 
[sane] operating system doing it differently ... so this really is paranoia.

bq. {{FSDirectory.FSIndexOutput#sync()}} should call flush() before syncing the 
underlying file.

OK I'll move it there (I'm currently doing it in the first close attempt).

bq. This does not do the for-loop we currently do to repeat the fsync 5 times 
if it fails.

I'll add an IOUtils.sync that takes an fd and does the retry thing.

bq. Also, I would not remove Directory.sync(), we should maybe leave this for 
LUCENE-5588 to sync the directory itself.

Right, we should add it back, as a method taking no file args?  Its purpose 
would be LUCENE-5588.

 FSDirectory.fsync() may not work properly
 -

 Key: LUCENE-3237
 URL: https://issues.apache.org/jira/browse/LUCENE-3237
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Shai Erera
 Attachments: LUCENE-3237.patch


 Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
 FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
 written to the file by other FileDescriptors. It would be better if we do 
 this operation on the actual RAF/FileOS which wrote the data. We can add 
 sync() to IndexOutput and FSIndexOutput will do that.
 Directory-wise, we should stop syncing on file names, and instead sync on the 
 IOs that performed the write operations.






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965182#comment-13965182
 ] 

Simon Willnauer commented on LUCENE-3237:
-

Hey mike, thanks for reopening this. I like the patch since it fixes multiple 
issues. 

 * I like the fact that we get rid of the general unsynced files stuff in 
Directory.
 * given the last point, we move it to the right place inside IW, which is where 
it should be
 * the problem that the current patch has is that it holds on to the buffers in 
BufferedIndexOutput. I think we need to work around this; here are a couple of 
ideas:
  ** introduce a SyncHandle class that we can pull from IndexOutput that 
allows closing the IndexOutput but lets you fsync after the fact
  ** this handle can be refcounted internally and we just decrement the count 
on IndexOutput#close() as well as on SyncHandle#close() 
  ** we can just hold on to the SyncHandle until we need to sync in IW 
  ** since this will basically close the underlying FD later we might want to 
think about size-bounding the number of unsynced files and maybe letting indexing 
threads fsync them concurrently? maybe something we can do later.
  ** if we know we flush for commit we can already fsync directly, which might 
save resources / time since it might be concurrent

just a couple of ideas

 FSDirectory.fsync() may not work properly
 -

 Key: LUCENE-3237
 URL: https://issues.apache.org/jira/browse/LUCENE-3237
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Shai Erera
 Attachments: LUCENE-3237.patch


 Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
 FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
 written to the file by other FileDescriptors. It would be better if we do 
 this operation on the actual RAF/FileOS which wrote the data. We can add 
 sync() to IndexOutput and FSIndexOutput will do that.
 Directory-wise, we should stop syncing on file names, and instead sync on the 
 IOs that performed the write operations.






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965185#comment-13965185
 ] 

Adrien Grand commented on LUCENE-5588:
--

bq. In fact. it also does not work on Linux, see 
http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/6952

FYI, the same person who reported this bug wrote an interesting blog post about 
fsync at 
http://blog.httrack.com/blog/2013/11/15/everything-you-always-wanted-to-know-about-fsync/

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
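The technique described above can be sketched with NIO.2; {{DirFsync}} and {{fsyncDir}} are illustrative names, not the patch's actual code. This works on Linux and typically on OS X, while on Windows opening a directory fails, matching the early-exit optimization discussed below:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

/** Sketch (assumption, not the committed patch): fsync a directory so that
 *  freshly created/renamed file entries survive a crash. */
public class DirFsync {
    public static void fsyncDir(Path dir) throws IOException {
        // Unlike RandomAccessFile, FileChannel.open() accepts a directory.
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // flush the directory's metadata (name entries)
        }
    }

    public static void main(String[] args) throws IOException {
        fsyncDir(Paths.get("."));
    }
}
```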






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965208#comment-13965208
 ] 

Uwe Schindler commented on LUCENE-5588:
---

Cool, thanks. Nice blog post! In fact our current patch should be fine then?

Should we commit it to trunk and branch_4x? I will also check MacOSX on my VM 
to validate if it also works on OSX, so I can modify the assert to check that 
the sync succeeds on OSX. Currently it only asserts on Linux that no errors 
occurred.

According to the blog post, Windows does not work at all, so we are fine with 
the optimization (early exit).

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965215#comment-13965215
 ] 

Christian Ziech commented on LUCENE-5584:
-

Thx for the very quick and helpful replies. It seems that I owe you some more 
hard and concrete information on our use case, what we exactly do and our 
environment.
About the environment - the tests were run with
{quote}
java version 1.7.0_45
OpenJDK Runtime Environment (rhel-2.4.3.4.el6_5-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
{quote}
on a CentOS 6.5. Our VM options don't enable the TLAB right now but I'm 
definitely considering using it for other reasons. Currently we are running with 
the following (gc relevant) arguments: -Xmx6g -XX:MaxNewSize=700m 
-XX:+UseConcMarkSweepGC -XX:MaxDirectMemorySize=35g. 

I'm not so much worried about the get performance although that could be 
improved as well. We are using Lucene's LevenshteinAutomata class to generate a 
couple of Levenshtein automatons with edit distance 1 or 2 (one for each search 
term), build the union of them and intersect them with our FST using a modified 
version of the method 
org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() which 
uses a callback method to push every matched entry instead of returning the 
whole list of paths (for efficiency reasons as well: we don't actually need the 
byte arrays but we want to parse them into a value object, hence reusing the 
output byte array is ok for us).
Our FST has about 500M entries and each entry has a value of approx. 10-20 
bytes. That produces for a random query with 4 terms (and hence a union of 4 
levenshtein automatons) an amount of ~2M visited nodes with output (hence 2M 
created temporary byte[]) and a total size of ~7.5MB for the temporary byte arrays 
(+ the overhead per instance). In that experiment I matched about 10k terms in 
the FST. Those numbers are taking into account that we already used our own add 
implementation that writes to always the same BytesRef instance when adding 
outputs.
The overall impact on the GC and also the execution speed of the method was 
rather significant in total - I can try to dig up numbers for that but they 
would be rather application specific.

Does this help and answer all the questions so far?

Btw: Experimenting a little with the change I noticed that things may be 
slightly more complicated since the output of a node is often overwritten with 
NO_OUTPUT from the Outputs - so that method would need to recycle the current 
output as well if possible but that may have interesting side effects - but 
hopefully that should be manageable.

 Allow FST read method to also recycle the output value when traversing FST
 --

 Key: LUCENE-5584
 URL: https://issues.apache.org/jira/browse/LUCENE-5584
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7.1
Reporter: Christian Ziech

 The FST class heavily reuses Arc instances when traversing the FST. The 
 output of an Arc however is not reused. This can especially be important when 
 traversing large portions of a FST and using the ByteSequenceOutputs and 
 CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
 node read (which has an output).
 In our use case we intersect a Lucene Automaton with an FST<BytesRef> much 
 like it is done in 
 org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
 since the Automaton and the FST are both rather large tens or even hundreds 
 of thousands of temporary byte array objects are created.
 One possible solution to the problem would be to change the 
 org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
 you don't want to change the existing methods for compatibility):
 {code}
   /** Decode an output value previously written with {@link
    *  #write(Object, DataOutput)}, reusing the object passed in if possible. */
   public abstract T read(DataInput in, T reuse) throws IOException;

   /** Decode an output value previously written with {@link
    *  #writeFinalOutput(Object, DataOutput)}. By default this
    *  just calls {@link #read(DataInput)}. This tries to reuse the object
    *  passed in if possible. */
   public T readFinalOutput(DataInput in, T reuse) throws IOException {
     return read(in, reuse);
   }
 {code}
 The new methods could then be used in the FST in the readNextRealArc() method 
 passing in the output of the reused Arc. For most inputs they could even just 
 invoke the original read(in) method.
 If you should decide to make that change I'd be happy to supply a patch 
 and/or tests for the feature.
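The reuse pattern proposed above can be illustrated with plain {{java.io.DataInput}} instead of Lucene's; {{ReusableOutputs}} is a hypothetical name, and the length-prefixed encoding is a simplifying assumption:

```java
import java.io.DataInput;
import java.io.IOException;

/** Sketch (assumption, not Lucene's Outputs API): decode a length-prefixed
 *  byte[] value, recycling the caller-provided buffer when it is large
 *  enough, so hot traversal loops avoid per-node allocations. */
public class ReusableOutputs {
    /** Returns 'reuse' filled with the value when it fits, else a fresh array.
     *  Note the returned array may be longer than the value, so callers must
     *  track the value length separately (as Lucene's BytesRef does). */
    public static byte[] read(DataInput in, byte[] reuse) throws IOException {
        int len = in.readInt();
        byte[] out = (reuse != null && reuse.length >= len) ? reuse : new byte[len];
        in.readFully(out, 0, len);
        return out;
    }
}
```

A traversal loop would pass the previously returned array back in as {{reuse}}, allocating only when a value outgrows the buffer.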





[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965214#comment-13965214
 ] 

Michael McCandless commented on LUCENE-3237:


Thanks Simon.

bq. Hey mike, thanks for reopening this. 

I actually didn't reopen yet ... because I do think this really is
paranoia.  The OS man pages make the semantics clear, and what we are
doing today (reopen the file for syncing) is correct.

bq. I like the fact that we get rid of the general unsynced files stuff in 
Directory.
bq. given the last point we move it in the right place inside IW that is where 
it should be

Yeah I really like that.

But, we could do that separately, i.e. add private tracking inside IW
of which newly written file names haven't been sync'd.

bq. the problem that the current patch has is that is holds on to the buffers 
in BufferedIndexOutput. I think we need to work around this here are a couple 
of ideas:
bq. introduce a SyncHandle class that we can pull from IndexOutput that allows 
to close the IndexOutput but lets you fsync after the fact

I think that's a good idea.  For FSDir impls this is just a thin
wrapper around FileDescriptor.

bq. this handle can be refcounted internally and we just decrement the count on 
IndexOutput#close() as well as on SyncHandle#close()
bq. we can just hold on to the SyncHandle until we need to sync in IW

Ref counting may be overkill?  Who else will be pulling/sharing this
sync handle?  Maybe we can add a IndexOutput.closeToSyncHandle, the
IndexOutput flushes and is unusable from then on, but returns the sync
handle which the caller must later close.

One downside of moving to this API is ... it rules out writing some
bytes, fsyncing, writing some more, fsyncing, e.g. if we wanted to add
a transaction log impl on top of Lucene.  But I think that's OK
(design for today).  There are other limitations in IndexOuput for
xlog impl...

bq. since this will basically close the underlying FD later we might want to 
think about size-bounding the number of unsynced files and maybe let indexing 
threads fsync them concurrently? maybe something we can do later.
bq. if we know we flush for commit we can already fsync directly which might 
save resources / time since it might be concurrent

Yeah we can pursue this in phase 2.  The OS will generally move
dirty buffers to stable storage anyway over time, so the cost of
fsyncing files written (relatively) long ago (10s of seconds; on Linux
I think the default is usually 30 seconds) will usually be low.  The
problem is on some filesystems fsync can be unexpectedly costly (there
was a famous case in ext3
https://bugzilla.mozilla.org/show_bug.cgi?id=421482 but this has been
fixed), so we need to be careful about this.


 FSDirectory.fsync() may not work properly
 -

 Key: LUCENE-3237
 URL: https://issues.apache.org/jira/browse/LUCENE-3237
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Shai Erera
 Attachments: LUCENE-3237.patch


 Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
 FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
 written to the file by other FileDescriptors. It would be better if we do 
 this operation on the actual RAF/FileOS which wrote the data. We can add 
 sync() to IndexOutput and FSIndexOutput will do that.
 Directory-wise, we should stop syncing on file names, and instead sync on the 
 IOs that performed the write operations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 2157 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/2157/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: PROPFIND 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:136)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.fetchRepositoryUUID(DAVConnection.java:120)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:150)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1320)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:518)
at hudson.model.Run.execute(Run.java:1689)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: PROPFIND /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.doPropfind(DAVConnection.java:131)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getProperties(DAVUtil.java:73)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getResourceProperties(DAVUtil.java:79)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getStartingProperties(DAVUtil.java:103)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:125)
... 32 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: PROPFIND request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: Connection reset
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 

[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 2159 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/2159/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([982687357BB46D83:F27D382422FA4D70]:0)
at java.lang.Integer.valueOf(Integer.java:642)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:125)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:702)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:613)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:920)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)




Build Log:
[...truncated 1049 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2 TEST FAIL: useCharFilter=false text='({ unfx mcmgcb iy mwmzgbi 
S%U\u8d27\udabd\udd3f\u4efa@4\u0324\ue58e\u3df6\u26d61 \ufe59 ngtnemdl a 
jpoqwniv \u0f37\uda78\udccb\u0413\uef41 \u0288\u0276\u027f\u025a\u02a0 
\u19c31@\uf4cdG wdfvjeue \uf183\uf5ee\udab5\ude6c\u02e8\uda18\ude88\uaa09\u02f9 
\ud7e8\ud7ff\ud7d6\ud7d8\ud7bd\ud7da\ud7dd ulttugi 
\u017a\ud9f3\ude4e\u05a7(\u07c7{\uffa3 '
   [junit4]   2 Exception from random analyzer: 
   [junit4]   2 charfilters=
   [junit4]   2 tokenizer=
   [junit4]   2   org.apache.lucene.analysis.ngram.NGramTokenizer(LUCENE_50, 
org.apache.lucene.util.AttributeSource$AttributeFactory$DefaultAttributeFactory@4678ccfc,
 37, 91)
   [junit4]   2 filters=
   [junit4]   2   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@48a7406c 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,
 45)
   [junit4]   2   
org.apache.lucene.analysis.de.GermanStemFilter(ValidatingTokenFilter@3927dd17 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,keyword=false)
   [junit4]   2   

[jira] [Commented] (SOLR-5932) DIH: retry query on terminating connection due to conflict with recovery

2014-04-10 Thread Gunnlaugur Thor Briem (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965278#comment-13965278
 ] 

Gunnlaugur Thor Briem commented on SOLR-5932:
-

One way to do this generically: define a set of exception predicates, e.g. 
{{retry_on_errors}} or something like that, which could be set or augmented in 
configuration. Each might be as simple as a regular expression to be matched 
against the exception message. When an exception is caught, the predicates are 
iterated and each applied to the exception. If a predicate evaluates as true 
(there's a match), then the exception is identified as one for which a retry is 
appropriate, and the import operation continues (unless the error repeats; 
maybe N retries with an exponential backoff, in case of a DB restart or 
momentary network hiccup or such).

That's reasonably general, i.e. not specific to any one DB engine, and can be 
extended by users by adding a regexp in an appropriate spot in 
{{db-data-config.xml}}.
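The predicate-plus-backoff scheme described above could be sketched as follows (the class and method names are made up for illustration; nothing here is part of DIH):

```java
// Sketch of regex-based retry predicates with exponential backoff, per the
// comment above. RetryPredicates and runWithRetries are hypothetical names.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

final class RetryPredicates {
  // Patterns matched against the exception message; in a real setup these
  // would come from configuration.
  private final List<Pattern> retryOn;

  RetryPredicates(List<String> regexes) {
    this.retryOn = regexes.stream().map(Pattern::compile).collect(Collectors.toList());
  }

  /** True if any configured pattern matches the exception message. */
  boolean shouldRetry(Throwable t) {
    String msg = t.getMessage();
    return msg != null && retryOn.stream().anyMatch(p -> p.matcher(msg).find());
  }

  /** Run the task, retrying matching failures with exponential backoff. */
  <T> T runWithRetries(Callable<T> task, int maxRetries) throws Exception {
    long backoffMs = 100;
    for (int attempt = 0; ; attempt++) {
      try {
        return task.call();
      } catch (Exception e) {
        if (attempt >= maxRetries || !shouldRetry(e)) throw e;
        Thread.sleep(backoffMs);
        backoffMs *= 2; // double the wait between attempts
      }
    }
  }
}
```

With the PostgreSQL message from this issue, the configured regex would simply be "terminating connection due to conflict with recovery".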

 DIH: retry query on terminating connection due to conflict with recovery
 --

 Key: SOLR-5932
 URL: https://issues.apache.org/jira/browse/SOLR-5932
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Priority: Minor

 When running DIH against a hot-standby PostgreSQL database, one may randomly 
 see queries fail with this error:
 {code}
 org.apache.solr.handler.dataimport.DataImportHandlerException: 
 org.postgresql.util.PSQLException: FATAL: terminating connection due to 
 conflict with recovery
   Detail: User query might have needed to see row versions that must be 
 removed.
   Hint: In a moment you should be able to reconnect to the database and 
 repeat your command.
 {code}
 A reasonable course of action in this case is to catch the error and retry. 
 This would support the use case of doing an initial (possibly lengthy) clean 
 full-import against a hot-standby server, and then just running incremental 
 dataimports against the master.






[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965282#comment-13965282
 ] 

Karl Wright commented on LUCENE-5584:
-

Hi Christian,

I think at this point, posting a proposed diff would be the best thing to do, 
along with maybe a snippet of code demonstrating our particular use case.
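The reuse pattern under discussion can be sketched generically, with plain byte arrays standing in for Lucene's Outputs/BytesRef classes (the names below are illustrative, not Lucene API):

```java
// Generic illustration of "read into a reusable output": the allocating
// version creates a new byte[] per value, the reusing version grows a caller-
// supplied buffer only when it is too small.
import java.io.DataInputStream;
import java.io.IOException;

final class ReusableReads {
  /** Holder whose backing array is grown only when too small, then reused. */
  static final class BytesBuf {
    byte[] bytes = new byte[0];
    int length;
  }

  /** Allocating version: one new byte[] per value read. */
  static byte[] read(DataInputStream in) throws IOException {
    int len = in.readInt();
    byte[] out = new byte[len];
    in.readFully(out);
    return out;
  }

  /** Reusing version: reads into {@code reuse}, reallocating only on growth. */
  static BytesBuf read(DataInputStream in, BytesBuf reuse) throws IOException {
    int len = in.readInt();
    if (reuse.bytes.length < len) {
      reuse.bytes = new byte[len]; // grow only when needed
    }
    in.readFully(reuse.bytes, 0, len);
    reuse.length = len;
    return reuse;
  }
}
```

In a traversal that reads tens or hundreds of thousands of outputs, the reusing variant allocates only when a value is larger than anything seen before.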


 Allow FST read method to also recycle the output value when traversing FST
 --

 Key: LUCENE-5584
 URL: https://issues.apache.org/jira/browse/LUCENE-5584
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7.1
Reporter: Christian Ziech

 The FST class heavily reuses Arc instances when traversing the FST. The 
 output of an Arc however is not reused. This can especially be important when 
 traversing large portions of a FST and using the ByteSequenceOutputs and 
 CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
 node read (which has an output).
 In our use case we intersect a lucene Automaton with a FSTBytesRef much 
 like it is done in 
 org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
 since the Automaton and the FST are both rather large tens or even hundreds 
 of thousands of temporary byte array objects are created.
 One possible solution to the problem would be to change the 
 org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
 you don't want to change the existing methods for compatibility):
 {code}
   /** Decode an output value previously written with {@link
*  #write(Object, DataOutput)} reusing the object passed in if possible */
   public abstract T read(DataInput in, T reuse) throws IOException;
   /** Decode an output value previously written with {@link
*  #writeFinalOutput(Object, DataOutput)}.  By default this
*  just calls {@link #read(DataInput)}. This tries to  reuse the object   
*  passed in if possible */
   public T readFinalOutput(DataInput in, T reuse) throws IOException {
 return read(in, reuse);
   }
 {code}
 The new methods could then be used in the FST in the readNextRealArc() method 
 passing in the output of the reused Arc. For most inputs they could even just 
 invoke the original read(in) method.
 If you should decide to make that change I'd be happy to supply a patch 
 and/or tests for the feature.






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588.patch

I cleaned up the patch:
- Reversed the loop (the FileChannel is opened once outside the loop and then 
fsync is tried 5 times). This makes the extra check for Windows obsolete. It is 
also in line with what [~mikemccand] plans on LUCENE-3237 (repeating only the 
fsync on an already open IndexOutput).
- Tested on Mac OS X: works; added an assert.

Uwe
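The core of the approach, independent of the patch's retry loop, is a minimal sketch like this (assuming a POSIX filesystem; on Windows, FileChannel.open on a directory throws IOException, which callers must tolerate):

```java
// Minimal sketch of fsyncing a directory with NIO.2, per this issue:
// open the directory read-only and force() its channel so that directory
// metadata (e.g. newly created file entries) reaches stable storage.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class DirFsync {
  static void fsyncDirectory(Path dir) throws IOException {
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true);
    }
  }
}
```

This works because, unlike RandomAccessFile, FileChannel.open() accepts a directory on platforms that allow opening one read-only.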

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Updated] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5340:


Attachment: SOLR-5340.patch

- Added the ability to create a named snapshot.
Example - /replication?command=backup&name=testbackup
- For named snapshots, maxNumberOfBackups and numberToKeep are ignored.
- Explicitly delete named snapshots.
Example - /replication?command=deletebackup&name=testbackup

 Add support for named snapshots
 ---

 Key: SOLR-5340
 URL: https://issues.apache.org/jira/browse/SOLR-5340
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Mike Schrag
 Attachments: SOLR-5340.patch


 It would be really nice if Solr supported named snapshots. Right now if you 
 snapshot a SolrCloud cluster, every node potentially records a slightly 
 different timestamp. Correlating those back together to effectively restore 
 the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965312#comment-13965312
 ] 

Simon Willnauer commented on LUCENE-3237:
-

{quote} Ref counting may be overkill? Who else will be pulling/sharing this
sync handle? Maybe we can add a IndexOutput.closeToSyncHandle, the
IndexOutput flushes and is unusable from then on, but returns the sync
handle which the caller must later close.{quote}

good!

{quote}

One downside of moving to this API is ... it rules out writing some
bytes, fsyncing, writing some more, fsyncing, e.g. if we wanted to add
a transaction log impl on top of Lucene. But I think that's OK
(design for today). There are other limitations in IndexOuput for
xlog impl...

{quote}

I don't see what keeps us from adding a sync method to IndexOutput that allows 
us to write bytes, fsync, write some more, and fsync again. I think we should 
make this change nevertheless. This can go in today, independent of where we use it.

bq. Yeah we can pursue this in phase 2. 
agreed

 FSDirectory.fsync() may not work properly
 -

 Key: LUCENE-3237
 URL: https://issues.apache.org/jira/browse/LUCENE-3237
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Shai Erera
 Attachments: LUCENE-3237.patch


 Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
 FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
 written to the file by other FileDescriptors. It would be better if we do 
 this operation on the actual RAF/FileOS which wrote the data. We can add 
 sync() to IndexOutput and FSIndexOutput will do that.
 Directory-wise, we should stop syncing on file names, and instead sync on the 
 IOs that performed the write operations.






[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965316#comment-13965316
 ] 

Christian Ziech commented on LUCENE-5584:
-

Trying to assemble the patch I came across the FST.Arc.copyFrom(Arc) method, 
which unfortunately seems to implicitly assume that the output of a node is 
immutable (which it would no longer be). Is this immutability intended? If 
not, I think the copyFrom() method would need to be moved into the FST class so 
that it can make use of the Outputs of the FST to clone the output of the 
copied arc if it is mutable ... however, that would increase the size of the 
patch and possibly impact other users too ...

 Allow FST read method to also recycle the output value when traversing FST
 --

 Key: LUCENE-5584
 URL: https://issues.apache.org/jira/browse/LUCENE-5584
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7.1
Reporter: Christian Ziech

 The FST class heavily reuses Arc instances when traversing the FST. The 
 output of an Arc however is not reused. This can especially be important when 
 traversing large portions of a FST and using the ByteSequenceOutputs and 
 CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
 node read (which has an output).
 In our use case we intersect a lucene Automaton with a FSTBytesRef much 
 like it is done in 
 org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
 since the Automaton and the FST are both rather large tens or even hundreds 
 of thousands of temporary byte array objects are created.
 One possible solution to the problem would be to change the 
 org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
 you don't want to change the existing methods for compatibility):
 {code}
   /** Decode an output value previously written with {@link
*  #write(Object, DataOutput)} reusing the object passed in if possible */
   public abstract T read(DataInput in, T reuse) throws IOException;
   /** Decode an output value previously written with {@link
*  #writeFinalOutput(Object, DataOutput)}.  By default this
*  just calls {@link #read(DataInput)}. This tries to  reuse the object   
*  passed in if possible */
   public T readFinalOutput(DataInput in, T reuse) throws IOException {
 return read(in, reuse);
   }
 {code}
 The new methods could then be used in the FST in the readNextRealArc() method 
 passing in the output of the reused Arc. For most inputs they could even just 
 invoke the original read(in) method.
 If you should decide to make that change I'd be happy to supply a patch 
 and/or tests for the feature.






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: (was: LUCENE-5588.patch)

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588.patch

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965354#comment-13965354
 ] 

Michael McCandless commented on LUCENE-5588:


+1, looks great!  Thanks Uwe.

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Created] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5589:
---

 Summary: release artifacts are too large.
 Key: LUCENE-5589
 URL: https://issues.apache.org/jira/browse/LUCENE-5589
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Doing a release currently produces *600MB* of artifacts. This is unwieldy...






[jira] [Created] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5590:
---

 Summary: remove .zip binary artifacts
 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir


It is enough to release this as .tgz






[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

All tests now randomly use external collection state (even collection1).

A few bug fixes:

When a single state command does multiple updates to the same collection (as 
in split), the first update was overwritten.

When the client has a collection state that is newer than the server's, a STALE 
state was thrown. The server now updates its state.

ZkStateReader.updateClusterState() was not updating external collections

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965373#comment-13965373
 ] 

Noble Paul edited comment on SOLR-5473 at 4/10/14 2:04 PM:
---

All tests now randomly use external collection state (even collection1).

A few bug fixes:

When a single state command does multiple updates to the same collection (as 
in split), the first update was overwritten.

When the client has a collection state that is newer than the server's, a STALE 
state was thrown. The server now updates its state.

I have done all the planned testing for this; I plan to commit this to trunk 
(4x branch later) soon if there are no more concerns.

ZkStateReader.updateClusterState() was not updating external collections


was (Author: noble.paul):
All tests now randomly use external (even collection1)

few  bug fixes

When , a single state command does multiple updates to the same collection  (as 
in split ) the first update was overwritten

When client has a collection state that is newer than server , STALE state was 
thrown. Server now updates its state

ZkStateReader.updateClusterState() was not updating external collections

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-5588:
-

Assignee: Uwe Schindler

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
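
A minimal sketch of the idea, assuming Java 7 NIO.2 (the class and method names here are illustrative, not Lucene's actual implementation):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DirFsync {

    // Best-effort directory fsync: unlike RandomAccessFile, FileChannel.open()
    // accepts a directory, so we can force its metadata (file creates and
    // renames) to stable storage.
    public static void fsyncDirectory(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        } catch (IOException e) {
            // Some platforms (e.g. Windows) refuse to open a directory;
            // treat the fsync as best-effort and ignore the failure.
        }
    }

    public static void main(String[] args) {
        fsyncDirectory(Paths.get("."));
        System.out.println("directory sync attempted");
    }
}
```

On POSIX filesystems this makes a preceding file create or rename in that directory durable across a crash; where the directory cannot be opened, the call degrades to a no-op, which is why "at least try to do it" above.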






[jira] [Updated] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-5963:
-

Affects Version/s: 5.0
  Summary: Finalize interface and backport analytics component to 
4x  (was: backport analytics component to 4x)

 Finalize interface and backport analytics component to 4x
 -

 Key: SOLR-5963
 URL: https://issues.apache.org/jira/browse/SOLR-5963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.8, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5963.patch


 Now that we seem to have fixed up the test failures for trunk for the 
 analytics component, we need to solidify the API and back-port it to 4x. For 
 history, see SOLR-5302 and SOLR-5488.
 As far as I know, these are the merges that need to occur to do this (plus 
 any that this JIRA brings up)
 svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
 The only remaining thing I think needs to be done is to solidify the 
 interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
 although SOLR-5488 is the most relevant one.
 [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
 interested here.
 I really want to put this to bed, so if we can get agreement on this soon I 
 can make it march.






[jira] [Commented] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965395#comment-13965395
 ] 

Erick Erickson commented on SOLR-5963:
--

Unless there are objections, I plan on committing this to 4.9 after the 4.8 
branch happens so we have maximum time to let it bake in 4x before releasing it 
into the wild.

 Finalize interface and backport analytics component to 4x
 -

 Key: SOLR-5963
 URL: https://issues.apache.org/jira/browse/SOLR-5963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5963.patch


 Now that we seem to have fixed up the test failures for trunk for the 
 analytics component, we need to solidify the API and back-port it to 4x. For 
 history, see SOLR-5302 and SOLR-5488.
 As far as I know, these are the merges that need to occur to do this (plus 
 any that this JIRA brings up)
 svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
 The only remaining thing I think needs to be done is to solidify the 
 interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
 although SOLR-5488 is the most relevant one.
 [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
 interested here.
 I really want to put this to bed, so if we can get agreement on this soon I 
 can make it march.






[jira] [Updated] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-5963:
-

Affects Version/s: (was: 4.8)
   4.9

 Finalize interface and backport analytics component to 4x
 -

 Key: SOLR-5963
 URL: https://issues.apache.org/jira/browse/SOLR-5963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5963.patch


 Now that we seem to have fixed up the test failures for trunk for the 
 analytics component, we need to solidify the API and back-port it to 4x. For 
 history, see SOLR-5302 and SOLR-5488.
 As far as I know, these are the merges that need to occur to do this (plus 
 any that this JIRA brings up)
 svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
 The only remaining thing I think needs to be done is to solidify the 
 interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
 although SOLR-5488 is the most relevant one.
 [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
 interested here.
 I really want to put this to bed, so if we can get agreement on this soon I 
 can make it march.






[jira] [Created] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Shai Erera (JIRA)
Shai Erera created LUCENE-5591:
--

 Summary: ReaderAndUpdates should create a proper IOContext when 
writing DV updates
 Key: LUCENE-5591
 URL: https://issues.apache.org/jira/browse/LUCENE-5591
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shai Erera


Today we pass IOContext.DEFAULT. If DV updates are used in conjunction with 
NRTCachingDirectory, the latter will attempt to write the entire DV field in 
its RAMDirectory, which could lead to OOM.

It would be good if we could build our own FlushInfo, estimating the number of 
bytes we're about to write. I didn't see a quick way to guesstimate that 
offhand - I thought of using the current DV's sizeInBytes as an approximation, 
but I don't see a way to get it, at least not directly.

Maybe we can use the size of the in-memory updates to guesstimate that amount? 
Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Is that too 
wild a guess?
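
As a rough sketch, the proposed estimate amounts to the following (class and method names are illustrative, not actual Lucene APIs):

```java
public class DvFlushEstimate {

    // Scale the in-memory size of the updates by the ratio of total docs to
    // updated docs, as proposed above. Purely illustrative arithmetic.
    public static long estimateFlushBytes(long sizeOfInMemUpdates,
                                          int maxDoc, int numUpdatedDocs) {
        if (numUpdatedDocs == 0) {
            return 0L;
        }
        return (long) (sizeOfInMemUpdates * ((double) maxDoc / numUpdatedDocs));
    }

    public static void main(String[] args) {
        // e.g. 1 MB of in-memory updates touching 100k of 10M docs
        System.out.println(estimateFlushBytes(1_000_000L, 10_000_000, 100_000));
        // → 100000000
    }
}
```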






[jira] [Commented] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965403#comment-13965403
 ] 

Michael McCandless commented on LUCENE-5591:


+1, good catch.

I think that guesstimate is a good start. It likely wildly over-estimates, 
though, since in-RAM structures are usually much more costly than the on-disk 
format; maybe try it out and see how much it over-estimates?

 ReaderAndUpdates should create a proper IOContext when writing DV updates
 -

 Key: LUCENE-5591
 URL: https://issues.apache.org/jira/browse/LUCENE-5591
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shai Erera

 Today we pass IOContext.DEFAULT. If DV updates are used in conjunction w/ 
 NRTCachingDirectory, it means the latter will attempt to write the entire DV 
 field in its RAMDirectory, which could lead to OOM.
 Would be good if we can build our own FlushInfo, estimating the number of 
 bytes we're about to write. I didn't see off hand a quick way to guesstimate 
 that - I thought to use the current DV's sizeInBytes as an approximation, but 
 I don't see a way to get it, not a direct way at least.
 Maybe we can use the size of the in-memory updates to guesstimate that 
 amount? Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Is 
 it a too wild guess?






[jira] [Commented] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965405#comment-13965405
 ] 

Shai Erera commented on LUCENE-5591:


I will. I think over-estimating is better than under-estimating in this case: 
worst case, the files are flushed to disk rather than the app hitting OOM.

 ReaderAndUpdates should create a proper IOContext when writing DV updates
 -

 Key: LUCENE-5591
 URL: https://issues.apache.org/jira/browse/LUCENE-5591
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shai Erera

 Today we pass IOContext.DEFAULT. If DV updates are used in conjunction w/ 
 NRTCachingDirectory, it means the latter will attempt to write the entire DV 
 field in its RAMDirectory, which could lead to OOM.
 Would be good if we can build our own FlushInfo, estimating the number of 
 bytes we're about to write. I didn't see off hand a quick way to guesstimate 
 that - I thought to use the current DV's sizeInBytes as an approximation, but 
 I don't see a way to get it, not a direct way at least.
 Maybe we can use the size of the in-memory updates to guesstimate that 
 amount? Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Is 
 it a too wild guess?






[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5473:


Attachment: SOLR-5473-74.patch

Thanks Noble.

There were a few conflicts in CollectionsAPIDistributedZKTest which I have 
fixed in this patch. I also introduced a system property 
tests.solr.stateFormat which sets the stateFormat to be used for the default 
collection. If this property is not set then the state format is chosen 
randomly.
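
The test hook described above amounts to something like this (the property name comes from the patch; the surrounding test plumbing and the helper class are assumed for illustration):

```java
import java.util.Random;

public class StateFormatChooser {

    // Honor tests.solr.stateFormat when set, otherwise pick the format
    // randomly, as described above.
    public static int chooseStateFormat(Random random) {
        String prop = System.getProperty("tests.solr.stateFormat");
        if (prop != null) {
            return Integer.parseInt(prop);
        }
        // 1 = shared clusterstate.json, 2 = per-collection state.json
        return random.nextBoolean() ? 2 : 1;
    }

    public static void main(String[] args) {
        System.out.println("stateFormat=" + chooseStateFormat(new Random()));
    }
}
```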

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, ec2-23-20-119-52_solr.log, 
 ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






[VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Robert Muir
artifacts are here:

http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/

here is my +1
SUCCESS! [0:46:25.014499]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-5592:
---

 Summary: Incorrectly reported uncloseable files.
 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss


As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4]   2 NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88)), sim=DefaultSimilarity, locale=en_MT, timezone=America/Menominee
   [junit4]   2 NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.7.0_03 
(64-bit)/cpus=8,threads=1,free=209293024,total=342491136
   [junit4]   2 NOTE: All tests run in this JVM: [TestNoMergePolicy, 
TestPriorityQueue, TestBagOfPositions, TestSpans, TestNRTThreads, 
TestIndexWriterExceptions, TestSimpleAttributeImpl, TestAtomicUpdate, 
TestStressAdvance, Nested1, TestCharsRef, TestBlockPostingsFormat3, 
TestMultiFields, TestDocumentWriter, TestTwoPhaseCommitTool, 
TestCompiledAutomaton, TestNRTReaderWithThreads, TestTransactionRollback, 
TestSearchAfter, TestTermVectorsFormat, TestParallelCompositeReader, 
TestTermVectorsWriter, TestNearSpansOrdered, TestFilterAtomicReader, 
TestMultiTermQueryRewrites, TestLongPostings, TestThreadedForceMerge, TestLock, 
Nested, TestPrefixFilter, TestTermRangeQuery, TestFieldCache, 
TestRecyclingByteBlockAllocator, TestTerm, Test2BPositions, TestArrayUtil, 
Nested1, TestSpanSearchEquivalence]
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestSpanSearchEquivalence -Dtests.seed=8886562EBCD30121 
-Dtests.slow=true -Dtests.directory=SimpleFSDirectory -Dtests.locale=en_MT 
-Dtests.timezone=America/Menominee -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) 
   [junit4] Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
   [junit4]at 
__randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
   [junit4]at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
   [junit4]at 
org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
   [junit4]at java.lang.Thread.run(Thread.java:722)
   [junit4] Completed on J1 in 0.41s, 8 tests, 1 error  FAILURES!
{code}






[jira] [Updated] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5592:


Description: 
As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
...
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) 
   [junit4] Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
   [junit4]at 
__randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
   [junit4]at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
   [junit4]at 
org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
   [junit4]at java.lang.Thread.run(Thread.java:722)
   [junit4] Completed on J1 in 0.41s, 8 tests, 1 error  FAILURES!
{code}

  was:
As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4]   2 NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88)), sim=DefaultSimilarity, locale=en_MT, timezone=America/Menominee
   [junit4]   2 NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.7.0_03 
(64-bit)/cpus=8,threads=1,free=209293024,total=342491136
   [junit4]   2 NOTE: All tests run in this JVM: [TestNoMergePolicy, 
TestPriorityQueue, TestBagOfPositions, TestSpans, TestNRTThreads, 
TestIndexWriterExceptions, TestSimpleAttributeImpl, TestAtomicUpdate, 
TestStressAdvance, Nested1, TestCharsRef, TestBlockPostingsFormat3, 
TestMultiFields, TestDocumentWriter, TestTwoPhaseCommitTool, 
TestCompiledAutomaton, TestNRTReaderWithThreads, TestTransactionRollback, 
TestSearchAfter, TestTermVectorsFormat, TestParallelCompositeReader, 
TestTermVectorsWriter, TestNearSpansOrdered, TestFilterAtomicReader, 
TestMultiTermQueryRewrites, TestLongPostings, TestThreadedForceMerge, TestLock, 
Nested, TestPrefixFilter, TestTermRangeQuery, TestFieldCache, 
TestRecyclingByteBlockAllocator, TestTerm, Test2BPositions, TestArrayUtil, 
Nested1, TestSpanSearchEquivalence]
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestSpanSearchEquivalence -Dtests.seed=8886562EBCD30121 
-Dtests.slow=true -Dtests.directory=SimpleFSDirectory -Dtests.locale=en_MT 
-Dtests.timezone=America/Menominee -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) 
   [junit4] Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
   [junit4]   

[jira] [Commented] (SOLR-4787) Join Contrib

2014-04-10 Thread Kranti Parisa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965413#comment-13965413
 ] 

Kranti Parisa commented on SOLR-4787:
-

Arul, thanks for posting the findings.

I don't think LONG fields are supported by bjoin.

 Join Contrib
 

 Key: SOLR-4787
 URL: https://issues.apache.org/jira/browse/SOLR-4787
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.2.1
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-4787-deadlock-fix.patch, 
 SOLR-4787-pjoin-long-keys.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
 SOLR-4797-hjoin-multivaluekeys-trunk.patch


 This contrib provides a place where different join implementations can be 
 contributed to Solr. This contrib currently includes 3 join implementations. 
 The initial patch was generated from the Solr 4.3 tag. Because of changes in 
 the FieldCache API this patch will only build with Solr 4.2 or above.
 *HashSetJoinQParserPlugin aka hjoin*
 The hjoin provides a join implementation that filters results in one core 
 based on the results of a search in another core. This is similar in 
 functionality to the JoinQParserPlugin but the implementation differs in a 
 couple of important ways.
 The first way is that the hjoin is designed to work with int and long join 
 keys only. So, in order to use hjoin, int or long join keys must be included 
 in both the to and from core.
 The second difference is that the hjoin builds memory structures that are 
 used to quickly connect the join keys. So, the hjoin will need more memory 
 than the JoinQParserPlugin to perform the join.
 The main advantage of the hjoin is that it can scale to join millions of keys 
 between cores and provide sub-second response time. The hjoin should work 
 well with up to two million results from the fromIndex and tens of millions 
 of results from the main query.
 The hjoin supports the following features:
 1) Both lucene query and PostFilter implementations. A *cost* > 99 will 
 turn on the PostFilter. The PostFilter will typically outperform the Lucene 
 query when the main query results have been narrowed down.
 2) With the lucene query implementation there is an option to build the 
 filter with threads. This can greatly improve the performance of the query if 
 the main query index is very large. The threads parameter turns on 
 threading. For example *threads=6* will use 6 threads to build the filter. 
 This will setup a fixed threadpool with six threads to handle all hjoin 
 requests. Once the threadpool is created the hjoin will always use it to 
 build the filter. Threading does not come into play with the PostFilter.
 3) The *size* local parameter can be used to set the initial size of the 
 hashset used to perform the join. If this is set above the number of results 
 from the fromIndex then you can avoid hashset resizing, which improves 
 performance.
 4) Nested filter queries. The local parameter fq can be used to nest a 
 filter query within the join. The nested fq will filter the results of the 
 join query. This can point to another join to support nested joins.
 5) Full caching support for the lucene query implementation. The filterCache 
 and queryResultCache should work properly even with deep nesting of joins. 
 Only the queryResultCache comes into play with the PostFilter implementation 
 because PostFilters are not cacheable in the filterCache.
 The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
 plugin is referenced by the string hjoin rather than join.
 fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
 fq=$qq\}user:customer1&qq=group:5
 The example filter query above will search the fromIndex (collection2) for 
 user:customer1 applying the local fq parameter to filter the results. The 
 lucene filter query will be built using 6 threads. This query will generate a 
 list of values from the from field that will be used to filter the main 
 query. Only records from the main query, where the to field is present in 
 the from list will be included in the results.
 The solrconfig.xml in the main query core must contain the reference to the 
 hjoin.
 queryParser name=hjoin 
 class=org.apache.solr.joins.HashSetJoinQParserPlugin
 And the join contrib lib jars must be registered in the solrconfig.xml.
  lib dir=../../../contrib/joins/lib regex=.*\.jar /
 After issuing the ant dist command from inside the solr directory the joins 
 contrib jar will appear in the solr/dist directory. Place the the 
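
The hash-set join described above can be sketched in plain Java (simplified for illustration: the real hjoin works on Lucene doc values and primitive long sets, not boxed collections):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashSetJoinSketch {

    // Collect the join keys produced by the fromIndex query into a hash set,
    // then keep only main-query docs whose join key is present in that set.
    public static List<Long> join(long[] fromKeys, long[][] mainDocs) {
        Set<Long> keys = new HashSet<>();
        for (long k : fromKeys) {
            keys.add(k);
        }
        List<Long> matches = new ArrayList<>();
        for (long[] doc : mainDocs) {
            long docId = doc[0];
            long joinKey = doc[1];
            if (keys.contains(joinKey)) {
                matches.add(docId);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        long[] fromKeys = {5L, 9L, 12L};                // keys from the fromIndex
        long[][] mainDocs = {{1, 5}, {2, 7}, {3, 12}};  // {docId, joinKey} pairs
        System.out.println(join(fromKeys, mainDocs));   // → [1, 3]
    }
}
```

This also makes the *size* parameter's purpose concrete: pre-sizing the set above the fromIndex result count avoids rehashing while it is built.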

[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965429#comment-13965429
 ] 

Dawid Weiss commented on LUCENE-5592:
-

Oh, it's a silly, silly bug. I'll clean up the code as part of this though.

 Incorrectly reported uncloseable files.
 ---

 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss

 As pointed out by Uwe, something dodgy is going on with unremovable file 
 detection because they seem to cross a suite boundary, as in.
 {code}
 // trunk
 svn update -r1586300
 cd lucene\core
 ant clean test -Dtests.directory=SimpleFSDirectory
 {code}
 {code}
[junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
 ...
[junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) 
[junit4] Throwable #1: java.io.IOException: Could not remove the 
 following files (in the order of attempts):
[junit4]
 C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
[junit4]
 C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
[junit4]
 C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
[junit4]
 C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
[junit4]
 C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
[junit4]  at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
[junit4]  at java.lang.Thread.run(Thread.java:722)
[junit4] Completed on J1 in 0.41s, 8 tests, 1 error  FAILURES!
 {code}






[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965448#comment-13965448
 ] 

ASF subversion and git services commented on LUCENE-5592:
-

Commit 1586337 from dwe...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586337 ]

LUCENE-5592: Incorrectly reported uncloseable files.

 Incorrectly reported uncloseable files.
 ---

 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss







[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965451#comment-13965451
 ] 

ASF subversion and git services commented on LUCENE-5592:
-

Commit 1586338 from dwe...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586338 ]

LUCENE-5592: Incorrectly reported uncloseable files.

 Incorrectly reported uncloseable files.
 ---

 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss
 Fix For: 4.8, 5.0








[jira] [Updated] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5592:


Fix Version/s: 5.0
   4.8

 Incorrectly reported uncloseable files.
 ---

 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss
 Fix For: 4.8, 5.0








[jira] [Resolved] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5592.
-

Resolution: Fixed

 Incorrectly reported uncloseable files.
 ---

 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965464#comment-13965464
 ] 

Shawn Heisey commented on LUCENE-5590:
--

There is no support in a completely vanilla Windows system for extracting a 
tarfile, gzipped or not.  It requires installing additional software, and some 
people work in tightly controlled environments where they cannot install 
anything.  For people who work in that kind of environment, getting a piece of 
software approved is a process that may take months, and if they are caught 
subverting security mechanisms to use an unapproved program, their employment 
could be terminated.


 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965475#comment-13965475
 ] 

Robert Muir commented on LUCENE-5590:
-

I don't think this argument applies: you already cannot use this software on a completely 
vanilla Windows system. You must at least install a JVM to do anything with it.

 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Michael McCandless
+1

SUCCESS! [0:46:33.654703]


Mike McCandless

http://blog.mikemccandless.com


On Thu, Apr 10, 2014 at 10:51 AM, Robert Muir rcm...@gmail.com wrote:
 artifacts are here:

 http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/

 here is my +1
 SUCCESS! [0:46:25.014499]




[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965524#comment-13965524
 ] 

Shawn Heisey commented on LUCENE-5590:
--

Someone who is interested in evaluating Solr and is on a Windows machine is 
likely to simply move on to another solution like ElasticSearch if they cannot 
find a .zip download.  Or were you just talking about Lucene itself?

I personally will be OK.  I don't run actual indexes (Solr) on Windows, but I 
download the .zip fairly frequently because my own computer where I do 
development work runs Windows 7.  I know what to do, and I don't have any 
restrictions on what I can install.  There will be people who look at a .tgz 
and have no idea what to do with it, and others who will be unable to install 
the required software.


 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965536#comment-13965536
 ] 

Noble Paul commented on SOLR-5340:
--

Does it mean that a user will have to fire a request to all nodes where this 
collection is running? That is complex. 
Can this be a single collection admin command, where you say back up a 
collection with some name and the system identifies the nodes and fires 
separate requests to each node?

 Add support for named snapshots
 ---

 Key: SOLR-5340
 URL: https://issues.apache.org/jira/browse/SOLR-5340
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Mike Schrag
 Attachments: SOLR-5340.patch


 It would be really nice if Solr supported named snapshots. Right now if you 
 snapshot a SolrCloud cluster, every node potentially records a slightly 
 different timestamp. Correlating those back together to effectively restore 
 the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Comment Edited] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965536#comment-13965536
 ] 

Noble Paul edited comment on SOLR-5340 at 4/10/14 4:56 PM:
---

Does it mean that a user will have to fire a request to all nodes where this 
collection is running? That is complex. 
Can this be a single collection admin command, where you specify a 
collection name plus a snapshot name and the system identifies the nodes and fires 
separate requests to each node?

I should be able to do the restore similarly. Working with individual 
nodes should be discouraged as much as possible.


was (Author: noble.paul):
Does it mean that a user will have to fire a request to all nodes where this 
collection is running? That is complex. 
Can this be a single collection admin command, where you say back up a 
collection with some name and the system identifies the nodes and fires 
separate requests to each node?

 Add support for named snapshots
 ---

 Key: SOLR-5340
 URL: https://issues.apache.org/jira/browse/SOLR-5340
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Mike Schrag
 Attachments: SOLR-5340.patch


 It would be really nice if Solr supported named snapshots. Right now if you 
 snapshot a SolrCloud cluster, every node potentially records a slightly 
 different timestamp. Correlating those back together to effectively restore 
 the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Commented] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965541#comment-13965541
 ] 

Varun Thacker commented on SOLR-5340:
-

I should have been more clear, I guess. This was the approach I had planned to 
take:

1. Use this Jira to add the ability for named snapshots/backups. This would be 
at the core level and thus could also be used by non-SolrCloud users.
2. In SOLR-5750, work on providing a seamless backup-collection and 
restore-collection API; it would utilise the work done on this Jira.


 Add support for named snapshots
 ---

 Key: SOLR-5340
 URL: https://issues.apache.org/jira/browse/SOLR-5340
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Mike Schrag
 Attachments: SOLR-5340.patch


 It would be really nice if Solr supported named snapshots. Right now if you 
 snapshot a SolrCloud cluster, every node potentially records a slightly 
 different timestamp. Correlating those back together to effectively restore 
 the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965543#comment-13965543
 ] 

Hoss Man commented on SOLR-5948:


Some speculation that these failures may have been caused by LUCENE-5574 ... but I'm 
not sure; I don't fully understand the scope of that bug and whether it could have 
led to a situation where _some_ (but not all) of the index files got deleted 
out from under the reader.

 Strange jenkins failure: *.si file not found in the middle of cloud test
 

 Key: SOLR-5948
 URL: https://issues.apache.org/jira/browse/SOLR-5948
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-5948.jenkins.log.txt, 
 jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt









[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965548#comment-13965548
 ] 

Michael McCandless commented on LUCENE-5590:


Maybe we should ship the .zip and not the .tgz?  Is .zip more universal?  The 
.zip compression is a bit worse ... ~16% larger with the 4.7.1 release.

 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965558#comment-13965558
 ] 

Shawn Heisey commented on LUCENE-5590:
--

If it came to an official vote, mine would be -0.  I don't oppose this strongly 
enough to block it, I just think it's a bad idea.


 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965561#comment-13965561
 ] 

Shawn Heisey commented on LUCENE-5590:
--

bq. Maybe we should ship the .zip and not the .tgz?

This might work.  'unzip' is a standard program on every *NIX machine that I 
use regularly.

 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965564#comment-13965564
 ] 

Shawn Heisey commented on LUCENE-5589:
--

My big concern with the release artifacts is user download time.  The Solr 
binary download is *HUGE* ... whenever I need to download a binary release to 
update my custom projects, I dread doing so when I'm at home where bandwidth is 
limited.

Solr's competition includes ElasticSearch.  Their .zip download is 21.6MB and 
the .tar.gz is even smaller.  Solr's .war file is larger than either, and 
that's just the tip of the iceberg.  There's a lot more 'stuff' in a Solr 
download, but the majority of users don't need that stuff.  Why should they 
download it unless they need it?


 release artifacts are too large.
 

 Key: LUCENE-5589
 URL: https://issues.apache.org/jira/browse/LUCENE-5589
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 Doing a release currently produces *600MB* of artifacts. This is unwieldy...






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965597#comment-13965597
 ] 

Robert Muir commented on LUCENE-5590:
-

The current artifacts are 600MB in size; this is an easy way to attack that 
problem.

Maybe I shouldn't have described the issue as removing something (since a .zip is 
not a mandatory part of an Apache release), but instead as don't create 
convenience binaries twice.

 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965615#comment-13965615
 ] 

Michael McCandless commented on SOLR-5948:
--

The corruption case for LUCENE-5574 is quite narrow: something (e.g. 
replication) has to copy over index files that replace previously used filenames.

Lucene itself never does this (it's write-once), but if e.g. these tests can 
overwrite pre-existing filenames then it could explain it.

 Strange jenkins failure: *.si file not found in the middle of cloud test
 

 Key: SOLR-5948
 URL: https://issues.apache.org/jira/browse/SOLR-5948
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-5948.jenkins.log.txt, 
 jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt









[jira] [Created] (LUCENE-5593) javadocs generation in release tasks: painfully slow

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5593:
---

 Summary: javadocs generation in release tasks: painfully slow
 Key: LUCENE-5593
 URL: https://issues.apache.org/jira/browse/LUCENE-5593
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Something is wrong here in the way this generation works: I see some of the 
same javadocs being generated over and over again. 

The current ant tasks seem to have an O(n!) runtime with respect to how many 
modules we have; it's obnoxiously slow on a non-beast computer. There is a bug 
here...






[jira] [Created] (LUCENE-5594) don't call 'svnversion' over and over in the build

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5594:
---

 Summary: don't call 'svnversion' over and over in the build
 Key: LUCENE-5594
 URL: https://issues.apache.org/jira/browse/LUCENE-5594
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Some ant tasks (at least release packaging; I don't know what else) call svnversion 
over and over for each module in the build. Can we just do this once 
instead?






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965645#comment-13965645
 ] 

Robert Muir commented on LUCENE-5590:
-

{quote}
 The .zip compression is a bit worse ... ~16% larger with the 4.7.1 release.
{quote}

It seems to already be set optimally; I set it to level 9 and got essentially 
the same size binary package.
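
As a quick sanity check of that observation, deflate output sizes at different levels can be compared directly with java.util.zip.Deflater (Ant's zip task uses the same 0-9 level scale). This is a throwaway illustration, not part of the Lucene build; the class name and sample data are made up:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

public class DeflateLevels {

    // Compress data at the given level (0-9, or -1 for the default)
    // and return the size of the compressed output in bytes.
    public static int compressedSize(byte[] data, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[8192];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buf);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Repetitive text stands in for a release tree full of similar files.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 20000; i++) {
            sb.append("lucene-solr release artifact ");
        }
        byte[] data = sb.toString().getBytes(StandardCharsets.UTF_8);
        int byDefault = compressedSize(data, Deflater.DEFAULT_COMPRESSION);
        int byLevel9 = compressedSize(data, Deflater.BEST_COMPRESSION);
        // Typically the two differ by very little, matching the observation
        // that raising the zip level did not shrink the package.
        System.out.println("default=" + byDefault + "B level9=" + byLevel9 + "B");
    }
}
```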


 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965648#comment-13965648
 ] 

Hoss Man commented on SOLR-5948:


bq. The corruption case for LUCENE-5574 is quite narrow: something (e.g. 
replication) has to copy over index files that replace previously used filenames.

That's certainly possible in these tests -- in both of the attached logs, Solr's 
SnapPuller was used by a replica to get caught up with its leader just prior 
to encountering the FileNotFoundExceptions.

 Strange jenkins failure: *.si file not found in the middle of cloud test
 

 Key: SOLR-5948
 URL: https://issues.apache.org/jira/browse/SOLR-5948
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-5948.jenkins.log.txt, 
 jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt









[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965663#comment-13965663
 ] 

Michael McCandless commented on SOLR-5948:
--

OK that's good news :)  Cross fingers...

 Strange jenkins failure: *.si file not found in the middle of cloud test
 

 Key: SOLR-5948
 URL: https://issues.apache.org/jira/browse/SOLR-5948
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-5948.jenkins.log.txt, 
 jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt









[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965681#comment-13965681
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586407 from uschind...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586407 ]

LUCENE-5588: Lucene now calls fsync() on the index directory, ensuring that all 
file metadata is persisted on disk in case of power failure.

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
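The technique described above can be sketched as follows. This is a minimal illustration only, not Lucene's actual implementation; the class and method names (`FsyncDir`, `fsyncDirectory`) are made up, and the platform caveats are noted in the comments:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal sketch of directory fsync (hypothetical names, not Lucene's code).
public class FsyncDir {

    // Flush the directory's metadata (new/renamed file entries) to stable
    // storage. Opening a directory with FileChannel works on POSIX systems;
    // some platforms (notably Windows) refuse, so failures are swallowed here.
    public static void fsyncDirectory(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // fsync(2) on the directory's file descriptor
        } catch (IOException e) {
            // A real implementation would decide whether to ignore or rethrow.
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("fsync-demo");
        Files.createFile(dir.resolve("segments_1"));
        // Without this, a power failure could lose the new directory entry
        // even though the file's own contents had been fsync'ed.
        fsyncDirectory(dir);
        System.out.println("synced " + dir);
    }
}
```

The point is that a commit which fsyncs only the files themselves is not durable until the directory entry referencing them is also on disk.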






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965684#comment-13965684
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586410 from uschind...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586410 ]

Merged revision(s) 1586407 from lucene/dev/trunk:
LUCENE-5588: Lucene now calls fsync() on the index directory, ensuring that all 
file metadata is persisted on disk in case of power failure.







[jira] [Resolved] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5588.
---

Resolution: Fixed







[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965690#comment-13965690
 ] 

Uwe Schindler commented on SOLR-5948:
-

Hi,
Do you want to fix this for 4.8? If yes, please set it to blocker; otherwise I 
will soon create the release branch!
Uwe










[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/

1 tests failed.
REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries

Error Message:
java.util.concurrent.ExecutionException: java.lang.NullPointerException

Stack Trace:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([A430328F79AAEB71:F8BEFE5463C35EDF]:0)
at 
org.apache.lucene.search.IndexSearcher$ExecutionHelper.next(IndexSearcher.java:836)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:542)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:416)
at 
org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:259)
at 
org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:205)
at 
org.apache.lucene.search.TestSearchAfter.testQueries(TestSearchAfter.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Chris Hostetter

FWIW: reproduce line does not reproduce for me.

: Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
: From: buil...@flonkings.com
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
: Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 -
: Failure!
: 
: Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
: 
: 1 tests failed.
: REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
: 
: Error Message:
: java.util.concurrent.ExecutionException: java.lang.NullPointerException
: 

Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Steve Rowe
+1

SUCCESS! [1:05:26.776253]

On Apr 10, 2014, at 8:51 AM, Robert Muir rcm...@gmail.com wrote:

 artifacts are here:
 
 http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/
 
 here is my +1
 SUCCESS! [0:46:25.014499]
 
 





Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Chris Hostetter

: http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/

+1 to the artifacts with these SHAs...

47ee3825d8c0e0c67f7e17d84c9e8f8a896ccf7b *lucene-4.7.2-src.tgz
d5bee6e4245b8ba4cf7ecf3660b4b71dc4dd8471 *lucene-4.7.2.tgz
5a54e386b0284fc90fd9804979a80913e33a74df *lucene-4.7.2.zip
169470a771a3a5cc7283f77f3ddbb739bf4a0cc6 *solr-4.7.2-src.tgz
5576cf3931beb05baecaad82a5783afb6dc8d490 *solr-4.7.2.tgz
7e7bd18a02be6619190845624c889b1571de3821 *solr-4.7.2.zip
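For anyone following along, digests like the ones above can be re-computed locally and compared. A minimal sketch in Java (the artifact here is a stand-in temp file containing the well-known "abc" test vector, not a real release file; `VerifySha1` is a made-up name):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch of checking an artifact against a published SHA1 digest.
public class VerifySha1 {

    // Hash the whole file in one read; fine for a demo, a real tool would stream.
    public static String sha1Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in artifact; in practice this would be e.g. lucene-4.7.2.tgz,
        // and 'expected' would be the digest posted in the vote thread.
        Path artifact = Files.createTempFile("artifact", ".tgz");
        Files.write(artifact, "abc".getBytes(StandardCharsets.US_ASCII));
        String expected = "a9993e364706816aba3e25717850c26c9cd0d89d"; // known SHA-1 of "abc"
        System.out.println(expected.equals(sha1Hex(artifact)) ? "OK" : "MISMATCH");
    }
}
```

On the command line the same check is what `sha1sum -c` does against a checksum file.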



hossman@frisbee:~/tmp/4.7.2$ python3.2 
~/lucene/branch_4_7/dev-tools/scripts/smokeTestRelease.py 
http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/ 
1586229 4.7.2 RC2
...
SUCCESS! [1:03:25.017040]




-Hoss
http://www.lucidworks.com/




Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Dawid Weiss
[junit4] # JRE version: 7.0_25-b15
[junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
mode linux-amd64 compressed oops)

Simon, is there a chance you could update your JVM? This one is quite old;
if we ran on a newer one we could ping Oracle to look into the issue.

Dawid

On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:

 FWIW: reproduce line does not reproduce for me.


Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Anshum Gupta
I think the reason why he's still running that version is
https://issues.apache.org/jira/browse/LUCENE-5212.




On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss
dawid.we...@cs.put.poznan.plwrote:

 [junit4] # JRE version: 7.0_25-b15
 [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
 mode linux-amd64 compressed oops)

 Simon, is there a chance you could update your JVM? This one is quite
 old; if we ran on a newer one we could
 ping Oracle to see into the issue.

 Dawid

 On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:
 
  FWIW: reproduce line does not reproduce for me.
 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Dawid Weiss
But that issue has been fixed (supposedly)?
D.

On Thu, Apr 10, 2014 at 11:27 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 I think the reason why he's still running that version is
 https://issues.apache.org/jira/browse/LUCENE-5212.




 On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl
 wrote:

 [junit4] # JRE version: 7.0_25-b15
 [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
 mode linux-amd64 compressed oops)

 Simon, is there a chance you could update your JVM? This one is quite
 old; if we ran on a newer one we could
 ping Oracle to see into the issue.

 Dawid

 On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:
 
  FWIW: reproduce line does not reproduce for me.
 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0) - Build # 10017 - Failure!

2014-04-10 Thread Robert Muir
I committed a fix.

On Sun, Apr 6, 2014 at 9:32 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10017/
 Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testNFCHuge

 Error Message:
 term 0 expected:...?睶Qܻǹ䃓̇o곝ꀪ߿곝�̾䄴泴.[͈곝�퉃]殧곝�퉃؝㣓�ᇝ䤣7�ľD儹�쪑?... but 
 was:...?睶Qܻǹ䃓̇o곝�퉃곝ꀪ߿곝�퉃곝�̾䄴泴.[곝�퉃곝�퉃͈]殧곝�퉃곝�퉃؝㣓�ᇝ䤣7�ľD儹�쪑?...

 Stack Trace:
 org.junit.ComparisonFailure: term 0 
 expected:...?睶Qܻǹ䃓̇o곝ꀪ߿�̾䄴泴.[͈퉃]殧؝㣓�ᇝ䤣7�ľD儹�쪑?... but 
 was:...?睶Qܻǹ䃓̇o곝ꀪ߿�̾䄴泴.[퉃͈]殧؝㣓�ᇝ䤣7�ľD儹�쪑?...
 at 
 __randomizedtesting.SeedInfo.seed([70B714E14A099CA3:7E48F2705D28916E]:0)
 at org.junit.Assert.assertEquals(Assert.java:125)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:179)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:302)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:352)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:361)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkOneTerm(BaseTokenStreamTestCase.java:425)
 at 
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.doTestMode(TestICUNormalizer2CharFilter.java:130)
 at 
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testNFCHuge(TestICUNormalizer2CharFilter.java:139)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Robert Muir
No released version of Java 7 yet has the fix.

I use update 25 too. I think it's a good one to test since it's the only
safe version you can currently use.

On Thu, Apr 10, 2014 at 3:30 PM, Dawid Weiss
dawid.we...@cs.put.poznan.pl wrote:
 But that issue has been fixed (supposedly)?
 D.

 On Thu, Apr 10, 2014 at 11:27 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 I think the reason why he's still running that version is
 https://issues.apache.org/jira/browse/LUCENE-5212.




 On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl
 wrote:

 [junit4] # JRE version: 7.0_25-b15
 [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
 mode linux-amd64 compressed oops)

 Simon, is there a chance you could update your JVM? This one is quite
 old; if we ran on a newer one we could
 ping Oracle to see into the issue.

 Dawid

 On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:
 
  FWIW: reproduce line does not reproduce for me.
 
  : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
  : From: buil...@flonkings.com
  : Reply-To: dev@lucene.apache.org
  : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
  : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build #
  82046 -
  : Failure!
  :
  : Build:
  builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
  :
  : 1 tests failed.
  : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
  :
  : Error Message:
  : java.util.concurrent.ExecutionException:
  java.lang.NullPointerException
  :
  : Stack Trace:
  : java.lang.RuntimeException: java.util.concurrent.ExecutionException:
  java.lang.NullPointerException
  :   at
  __randomizedtesting.SeedInfo.seed([A430328F79AAEB71:F8BEFE5463C35EDF]:0)
  :   at
  org.apache.lucene.search.IndexSearcher$ExecutionHelper.next(IndexSearcher.java:836)
  :   at
  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:542)
  :   at
  org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:416)
  :   at
  org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:259)
  :   at
  org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:205)
  :   at
  org.apache.lucene.search.TestSearchAfter.testQueries(TestSearchAfter.java:189)
  :   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  :   at
  sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  :   at
  sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  :   at java.lang.reflect.Method.invoke(Method.java:606)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  :   at
  org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  :   at
  org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
  :   at
  org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  :   at
  com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  :   at
  org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  :   at
  org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  :   at
  org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  :   at
  com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  :   at
  com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
  :   at
  com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
  :   at
  com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  :   at
  com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  :   at
  org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  :   at
  org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  :   at
  

[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965902#comment-13965902
 ] 

Upayavira commented on LUCENE-5590:
---

What is the problem that this is attempting to solve? It seems that providing 
tgz and zip distributions maximises our ability to help our users.

Unless there is a really good reason to change what we do, this seems like 
a sure-fire way to annoy half of our users.

 remove .zip binary artifacts
 

 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir

 It is enough to release this as .tgz



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588-nonexistfix.patch

Here a fix for the failures.

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588-nonexistfix.patch, LUCENE-5588.patch, 
 LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-5588:
---


There is a problem in Solr: Solr sometimes tries to call FSDirectory.sync on a 
directory that does not even exist. This seems to happen when the index is 
empty and NRTCachingDirectory is used. In that case IndexWriter syncs with an 
empty file list.

The fix is to only sync the directory itself if any file inside it was synced 
before. Otherwise there is no need to sync at all.

We should fix this behaviour in the future. Maybe the directory should be 
created beforehand so it always exists?

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588-nonexistfix.patch, LUCENE-5588.patch, 
 LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0) - Build # 9899 - Failure!

2014-04-10 Thread Robert Muir
This was the same buffering bug; it's fixed.

On Sat, Apr 5, 2014 at 10:15 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9899/
 Java: 32bit/jdk1.8.0 -client -XX:+UseSerialGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

 Error Message:
 term 450 expected:?[?਍] but was:?[?਍ਹ]

 Stack Trace:
 org.junit.ComparisonFailure: term 450 expected:?[?਍] but was:?[?ਹ]
 at 
 __randomizedtesting.SeedInfo.seed([B4100D1094B8E1C3:3C990DAE37BCB6F6]:0)
 at org.junit.Assert.assertEquals(Assert.java:125)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:179)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
 at 
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:202)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 

Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9897 - Failure!

2014-04-10 Thread Robert Muir
This is a different bug; in this case the offset computation is wrong.
I'll open an issue.

On Sat, Apr 5, 2014 at 6:37 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9897/
 Java: 32bit/jdk1.7.0_60-ea-b10 -server -XX:+UseSerialGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

 Error Message:
 startOffset 107 expected:587 but was:588

 Stack Trace:
 java.lang.AssertionError: startOffset 107 expected:587 but was:588
 at 
 __randomizedtesting.SeedInfo.seed([19423CE8988D3E11:91CB3C563B896924]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:181)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
 at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
 at 
 org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:186)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965908#comment-13965908
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586475 from uschind...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586475 ]

LUCENE-5588: Workaround for fsyncing non-existing directory

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588-nonexistfix.patch, LUCENE-5588.patch, 
 LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965909#comment-13965909
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586476 from uschind...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586476 ]

Merged revision(s) 1586475 from lucene/dev/trunk:
LUCENE-5588: Workaround for fsyncing non-existing directory

 We should also fsync the directory when committing
 --

 Key: LUCENE-5588
 URL: https://issues.apache.org/jira/browse/LUCENE-5588
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5588-nonexistfix.patch, LUCENE-5588.patch, 
 LUCENE-5588.patch, LUCENE-5588.patch


 Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
 (LUCENE-5570), we can also fsync the directory (at least try to do it). 
 Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
 also open a directory: 
 http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


