[jira] [Commented] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966293#comment-13966293
 ] 

Noble Paul commented on SOLR-5340:
--

OK, I'm not sure if SOLR-5750 would be able to use this. For a given collection, you would want to save all the data in a single place, and only a single copy (not one per replica). Saving to a single location will be difficult with this API.

My point is, I'm not sure this will be directly reusable for SOLR-5750. But as a standalone feature it is low-hanging fruit, it should be fine, and it does not have to be linked to SOLR-5750.

> Add support for named snapshots
> ---
>
> Key: SOLR-5340
> URL: https://issues.apache.org/jira/browse/SOLR-5340
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.5
>Reporter: Mike Schrag
> Attachments: SOLR-5340.patch
>
>
> It would be really nice if Solr supported named snapshots. Right now if you 
> snapshot a SolrCloud cluster, every node potentially records a slightly 
> different timestamp. Correlating those back together to effectively restore 
> the entire cluster to a consistent snapshot is pretty tedious.
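The tedium described above disappears if every node records the same caller-supplied snapshot name instead of its own timestamp. A rough Python sketch of the idea (the node URLs are hypothetical, and the `name` parameter on the replication handler's backup command is exactly what this issue proposes to add):

```python
def backup_urls(replicas, snapshot_name):
    """Build one backup URL per replica, all sharing a single snapshot name,
    so the per-node copies can be correlated later without comparing
    timestamps. The 'name' parameter is the proposed addition."""
    return ["%s/replication?command=backup&name=%s" % (r, snapshot_name)
            for r in replicas]

urls = backup_urls(["http://node1:8983/solr/core1",
                    "http://node2:8983/solr/core1"], "snap-2014-04-10")
for u in urls:
    print(u)
```

Restore would then look up the same name on every node, rather than hunting for the closest timestamps.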



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Dawid Weiss
Java 8 GA should have it, from what I see. It is a bit surprising that the latest official Java SE release (still 7u51) doesn't. Thanks.

D.

On Thu, Apr 10, 2014 at 11:33 PM, Robert Muir  wrote:
> no released version of java7 yet has the fix.
>
> I use update 25 too. I think it's a good one to test since it's the only
> safe version you can currently use.
>
> On Thu, Apr 10, 2014 at 3:30 PM, Dawid Weiss
>  wrote:
>> But that issue has been fixed (supposedly)?
>> D.
>>
>> On Thu, Apr 10, 2014 at 11:27 PM, Anshum Gupta  
>> wrote:
>>> I think the reason why he's still running that version is
>>> https://issues.apache.org/jira/browse/LUCENE-5212.
>>>
>>>
>>>
>>>
>>> On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss 
>>> wrote:

 [junit4] # JRE version: 7.0_25-b15
 [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
 mode linux-amd64 compressed oops)

 Simon, is there a chance you could update your JVM? This one is quite
 old; if we ran on a newer one we could ask Oracle to look into the issue.

 Dawid

 On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
  wrote:
 >
 > FWIW: reproduce line does not reproduce for me.
 >
 > : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
 > : From: buil...@flonkings.com
 > : Reply-To: dev@lucene.apache.org
 > : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
 > : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build #
 > 82046 -
 > : Failure!
 > :
 > : Build:
 > builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
 > :
 > : 1 tests failed.
 > : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
 > :
 > : Error Message:
 > : java.util.concurrent.ExecutionException:
 > java.lang.NullPointerException
 > :
 > : Stack Trace:
 > : java.lang.RuntimeException: java.util.concurrent.ExecutionException:
 > java.lang.NullPointerException
 > :   at
 > __randomizedtesting.SeedInfo.seed([A430328F79AAEB71:F8BEFE5463C35EDF]:0)
 > :   at
 > org.apache.lucene.search.IndexSearcher$ExecutionHelper.next(IndexSearcher.java:836)
 > :   at
 > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:542)
 > :   at
 > org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:416)
 > :   at
 > org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:259)
 > :   at
 > org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:205)
 > :   at
 > org.apache.lucene.search.TestSearchAfter.testQueries(TestSearchAfter.java:189)
 > :   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 > :   at
 > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 > :   at
 > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 > :   at java.lang.reflect.Method.invoke(Method.java:606)
 > :   at
 > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 > :   at
 > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 > :   at
 > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 > :   at
 > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 > :   at
 > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 > :   at
 > org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 > :   at
 > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 > :   at
 > com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 > :   at
 > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 > :   at
 > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 > :   at
 > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 > :   at
 > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 > :   at
 > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 > :   at
 > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 > :   at
 > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 > :   at
 > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.j

Lucene/Solr 4.8 release branch created

2014-04-10 Thread Uwe Schindler
Hi,

I created the Lucene/Solr release branch for version 4.8: 
https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_4_8

Please only commit blocker issues to this branch! The version numbers in the 
current stable branch_4x were updated to Lucene 4.9. I will start the first RC 
during the next week.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de







[jira] [Created] (LUCENE-5596) Support for index/search large numeric numbers

2014-04-10 Thread Kevin Wang (JIRA)
Kevin Wang created LUCENE-5596:
--

 Summary: Support for index/search large numeric numbers
 Key: LUCENE-5596
 URL: https://issues.apache.org/jira/browse/LUCENE-5596
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Kevin Wang


Currently, if a number is larger than Long.MAX_VALUE, we can't index or search it in Lucene as a number. For example, an IPv6 address is a 128-bit number, so we can't index it as a numeric field or run numeric range queries on it, etc.

It would be good to support BigInteger / BigDecimal.

I've tried using BigInteger for IPv6 in Elasticsearch and that works fine, but there are still lots of things to do:
https://github.com/elasticsearch/elasticsearch/pull/5758
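The underlying trick for indexing numbers beyond Long.MAX_VALUE is to encode them as fixed-width, big-endian byte arrays, so that byte-wise term order matches numeric order. This is not the Lucene or Elasticsearch implementation; it is just a Python sketch of that property (the helper name is made up):

```python
import ipaddress

def encode_uint128(value, width=16):
    """Encode a non-negative integer as fixed-width big-endian bytes.

    With a fixed width, lexicographic byte order matches numeric order,
    which is the property a term-based range query needs."""
    if value < 0 or value >= 1 << (8 * width):
        raise ValueError("value out of range for %d bytes" % width)
    return value.to_bytes(width, "big")

# Two IPv6 addresses, encoded: byte comparison agrees with numeric order.
a = encode_uint128(int(ipaddress.ip_address("2001:db8::1")))
b = encode_uint128(int(ipaddress.ip_address("2001:db8::2")))
assert a < b
```

BigDecimal would additionally need a scale/exponent component in the encoding, which is part of why there are "still lots of things to do".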








[jira] [Commented] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966095#comment-13966095
 ] 

Steven Bower commented on SOLR-5963:


[~erickerickson] sounds good to me... I was planning on doing about the same thing, so thanks for jumping on it.

> Finalize interface and backport analytics component to 4x
> -
>
> Key: SOLR-5963
> URL: https://issues.apache.org/jira/browse/SOLR-5963
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.9, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5963.patch
>
>
> Now that we seem to have fixed up the test failures for trunk for the 
> analytics component, we need to solidify the API and back-port it to 4x. For 
> history, see SOLR-5302 and SOLR-5488.
> As far as I know, these are the merges that need to occur to do this (plus 
> any that this JIRA brings up)
> svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
> The only remaining thing I think needs to be done is to solidify the 
> interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
> although SOLR-5488 is the most relevant one.
> [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
> interested here.
> I really want to put this to bed, so if we can get agreement on this soon I 
> can make it march.






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966051#comment-13966051
 ] 

Upayavira commented on LUCENE-5590:
---

I agree with Anshum. Regarding the Lucene zip, I have no opinion.

> remove .zip binary artifacts
> 
>
> Key: LUCENE-5590
> URL: https://issues.apache.org/jira/browse/LUCENE-5590
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Robert Muir
>
> It is enough to release this as .tgz






Re: svn commit: r1586473 - /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/ icu/ICUNormalizer2CharFilter.java

2014-04-10 Thread Robert Muir
Dude, I am just replying to test failures from Jenkins.

I am not worried about helping people using unreleased code: they take
on that risk themselves.

I don't imagine myself creating a JIRA issue every time Jenkins fails
and I want to fix things, when it's unnecessary.
It's not like anyone else is doing this either: why should I have to do more?

It seems like it should be enough that I debugged some failures on the
airplane, because I wanted to help out.

On Thu, Apr 10, 2014 at 5:48 PM, Chris Hostetter
 wrote:
>
> : the functionality is unreleased. So is it really interesting to anyone?
>
> Ah, ok ... in that case I would advocate labeling this type of commit with
> the same initial JIRA that added the class -- that way anyone looking at
> the JIRA and wanting to generate a patch (to backport for their personal
> usage in 4.7, for example) would have seen the additional commit needed to
> get it working properly.
>
> Not a big deal though.
>
>
> -Hoss
> http://www.lucidworks.com/
>
>




[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966026#comment-13966026
 ] 

Robert Muir commented on LUCENE-5589:
-

I guess it depends on how the RM does releasing.

For example, I probably created something like 10 release candidates this week, running the smoketester (file:/// URL, no network transfer required), doing other tests, etc. Once I've iterated until I'm happy, I do the upload and call a vote.

So it's somewhat like 'compile-test-debug'. My concern with a "jenkins" doing this is that it would be slower: it puts network transfer right in the middle of this iteration loop.

> release artifacts are too large.
> 
>
> Key: LUCENE-5589
> URL: https://issues.apache.org/jira/browse/LUCENE-5589
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Doing a release currently produces *600MB* of artifacts. This is unwieldy...






Re: svn commit: r1586473 - /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/ icu/ICUNormalizer2CharFilter.java

2014-04-10 Thread Chris Hostetter

: the functionality is unreleased. So is it really interesting to anyone?

Ah, ok ... in that case I would advocate labeling this type of commit with
the same initial JIRA that added the class -- that way anyone looking at
the JIRA and wanting to generate a patch (to backport for their personal
usage in 4.7, for example) would have seen the additional commit needed to
get it working properly.

Not a big deal though.


-Hoss
http://www.lucidworks.com/




[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966017#comment-13966017
 ] 

Hoss Man commented on LUCENE-5589:
--

bq. And at the end of the day, you still have the network transfer, as I can't imagine not testing an RC before calling a vote.

For most people, network "download" speed is much faster than network "upload" speed. An approach like I'm suggesting would leverage the ASF infra hardware & network to do "server -> server" file copies whenever possible, instead of requiring the kind of "laptop -> server" copying our current process involves.

By my count, the RM currently has to "upload" the large release artifacts from their local net a minimum of 3 times:
* push the whole RC to people.apache.org
* push the maven jars to the maven staging repo (not sure if the final maven publish involves a local copy or if we currently re-push)
* push the non-maven artifacts to dist.apache.org
...I'm suggesting that all of that can be eliminated.


I *never* suggested that the RM wouldn't personally test the RC before calling the vote -- that would of course still be the way we do things, and it was covered in the steps I mentioned...

{quote}
* on the RM's local machine, he runs some "smoke-test-release.py --no-gpg-check" script pointed at the SVN URL of the release candidate
** this smoke-test-release.py script will do an svn checkout, followed by all of the normal things that our existing release smoke checker does
** (the "--no-gpg-check" is because the RM hasn't signed anything yet)
** if the smoke test script passes, then the RM (on his local machine) runs a "sign-releases.py" script on the artifacts & svn commits the newly created *.asc files (or maybe the script does that automatically)
* the RM then sends out the email calling a vote on the RC 
{quote}
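The sign-and-commit step quoted above could be sketched roughly as follows; the "sign-releases.py" wrapper is hypothetical (Hoss is spitballing), though the gpg flags are the standard ones for a detached, ASCII-armored signature:

```python
def sign_command(path, key_id=None):
    """Build the gpg invocation a hypothetical sign-releases.py might run:
    gpg --armor --detach-sign writes an ASCII signature next to 'path'
    as <path>.asc. Only the gpg flags are real; the wrapper is assumed."""
    cmd = ["gpg", "--armor", "--detach-sign"]
    if key_id:
        # Pick a specific signing key rather than the gpg default.
        cmd += ["--local-user", key_id]
    return cmd + [path]

# Print (rather than run) the commands for a couple of sample artifacts.
for artifact in ["lucene-4.8.0.tgz", "solr-4.8.0.tgz"]:
    print(" ".join(sign_command(artifact)))
```

The resulting *.asc files are the only things the RM would then upload, via the svn commit mentioned in the steps.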

> release artifacts are too large.
> 
>
> Key: LUCENE-5589
> URL: https://issues.apache.org/jira/browse/LUCENE-5589
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Doing a release currently produces *600MB* of artifacts. This is unwieldy...






[jira] [Commented] (SOLR-5976) OverseerTest failing in jenkins

2014-04-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966014#comment-13966014
 ] 

Mark Miller commented on SOLR-5976:
---

Looks like this is a dupe of SOLR-5596.

> OverseerTest failing in jenkins
> ---
>
> Key: SOLR-5976
> URL: https://issues.apache.org/jira/browse/SOLR-5976
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1480/
> 1 tests failed.
> REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure
> Error Message:
> Could not register as the leader because creating the ephemeral registration 
> node in ZooKeeper failed
> Stack Trace:
> org.apache.solr.common.SolrException: Could not register as the leader 
> because creating the ephemeral registration node in ZooKeeper failed
> at 
> __randomizedtesting.SeedInfo.seed([D5B102D1CE94C29D:D1B98D22DC312DBC]:0)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:136)






[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965991#comment-13965991
 ] 

Robert Muir commented on LUCENE-5589:
-

I'm not sure that really helps: we already do this in a way in Jenkins, with the nightly-smoke task (but it's rarely run, and we discard the artifacts).

And at the end of the day, you still have the network transfer, as I can't imagine not testing an RC before calling a vote.

So I think we should still be cautious about the size of the artifacts we are releasing, for a number of reasons.

> release artifacts are too large.
> 
>
> Key: LUCENE-5589
> URL: https://issues.apache.org/jira/browse/LUCENE-5589
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Doing a release currently produces *600MB* of artifacts. This is unwieldy...






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965989#comment-13965989
 ] 

Robert Muir commented on LUCENE-5590:
-

Well, if it's useful for Solr, then keep it there?

But I see no need for the 72MB Lucene zip.

> remove .zip binary artifacts
> 
>
> Key: LUCENE-5590
> URL: https://issues.apache.org/jira/browse/LUCENE-5590
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Robert Muir
>
> It is enough to release this as .tgz



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1586473 - /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/ icu/ICUNormalizer2CharFilter.java

2014-04-10 Thread Robert Muir
the functionality is unreleased. So is it really interesting to anyone?

On Thu, Apr 10, 2014 at 5:01 PM, Chris Hostetter
 wrote:
>
> rmuir: shouldn't this have a Jira to track the fix & record it in
> CHANGES.txt ?
>
>
> : Date: Thu, 10 Apr 2014 21:30:53 -
> : From: rm...@apache.org
> : Reply-To: dev@lucene.apache.org
> : To: comm...@lucene.apache.org
> : Subject: svn commit: r1586473 -
> : 
> /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/
> : icu/ICUNormalizer2CharFilter.java
> :
> : Author: rmuir
> : Date: Thu Apr 10 21:30:53 2014
> : New Revision: 1586473
> :
> : URL: http://svn.apache.org/r1586473
> : Log:
> : fix bug in buffering logic of this charfilter
> :
> : Modified:
> : 
> lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
> :
> : Modified: 
> lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
> : URL: 
> http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java?rev=1586473&r1=1586472&r2=1586473&view=diff
> : 
> ==
> : --- 
> lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
>  (original)
> : +++ 
> lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
>  Thu Apr 10 21:30:53 2014
> : @@ -104,7 +104,9 @@ public final class ICUNormalizer2CharFil
> :
> :  // if checkedInputBoundary was at the end of a buffer, we need to 
> check that char again
> :  checkedInputBoundary = Math.max(checkedInputBoundary - 1, 0);
> : -if (normalizer.isInert(tmpBuffer[len - 1]) && 
> !Character.isHighSurrogate(tmpBuffer[len-1])) {
> : +// this loop depends on 'isInert' (changes under normalization) but 
> looks only at characters.
> : +// so we treat all surrogates as non-inert for simplicity
> : +if (normalizer.isInert(tmpBuffer[len - 1]) && 
> !Character.isSurrogate(tmpBuffer[len-1])) {
> :return len;
> :  } else return len + readInputToBuffer();
> :}
> :
> :
> :
>
> -Hoss
> http://www.lucidworks.com/
>
>




Re: svn commit: r1586473 - /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/ icu/ICUNormalizer2CharFilter.java

2014-04-10 Thread Chris Hostetter

rmuir: shouldn't this have a Jira to track the fix & record it in 
CHANGES.txt ?


: Date: Thu, 10 Apr 2014 21:30:53 -
: From: rm...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: svn commit: r1586473 -
: /lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/
: icu/ICUNormalizer2CharFilter.java
: 
: Author: rmuir
: Date: Thu Apr 10 21:30:53 2014
: New Revision: 1586473
: 
: URL: http://svn.apache.org/r1586473
: Log:
: fix bug in buffering logic of this charfilter
: 
: Modified:
: 
lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
: 
: Modified: 
lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
: URL: 
http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java?rev=1586473&r1=1586472&r2=1586473&view=diff
: ==
: --- 
lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
 (original)
: +++ 
lucene/dev/trunk/lucene/analysis/icu/src/java/org/apache/lucene/analysis/icu/ICUNormalizer2CharFilter.java
 Thu Apr 10 21:30:53 2014
: @@ -104,7 +104,9 @@ public final class ICUNormalizer2CharFil
:  
:  // if checkedInputBoundary was at the end of a buffer, we need to check 
that char again
:  checkedInputBoundary = Math.max(checkedInputBoundary - 1, 0);
: -if (normalizer.isInert(tmpBuffer[len - 1]) && 
!Character.isHighSurrogate(tmpBuffer[len-1])) {
: +// this loop depends on 'isInert' (changes under normalization) but 
looks only at characters.
: +// so we treat all surrogates as non-inert for simplicity
: +if (normalizer.isInert(tmpBuffer[len - 1]) && 
!Character.isSurrogate(tmpBuffer[len-1])) {
:return len;
:  } else return len + readInputToBuffer();
:}
: 
: 
: 

-Hoss
http://www.lucidworks.com/
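For context on the one-character fix in the diff above (isHighSurrogate -> isSurrogate): a buffer can end on a low surrogate as well as a high one, so excluding only high surrogates was not enough. A rough Python rendering of the two Java predicates over UTF-16 code units:

```python
def is_high_surrogate(u):
    # Java's Character.isHighSurrogate: leading surrogate range only.
    return 0xD800 <= u <= 0xDBFF

def is_surrogate(u):
    # Java's Character.isSurrogate: any surrogate code unit, high or low.
    return 0xD800 <= u <= 0xDFFF

low = 0xDC00  # a low (trailing) surrogate code unit

# The old check let a buffer ending in a low surrogate count as safe to cut:
assert not is_high_surrogate(low)
# The fixed check treats any surrogate as unsafe to split the buffer on:
assert is_surrogate(low)
```

As the added comment in the diff says, treating all surrogates as non-inert is the simple conservative choice, since the loop only looks at individual chars.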




[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965960#comment-13965960
 ] 

Hoss Man commented on LUCENE-5589:
--

Robert noted in LUCENE-5590...

bq.  ... Our release process generates 600MB of "stuff". The size just keeps 
getting bigger and bigger, and is already difficult to work with (e.g. upload 
across the internet). ...

While at ApacheCon the other day, rmuir mentioned his frustration with trying to publish a 4.7.2 RC over the slow conference network, which led to this parent issue.

Later, while rmuir wasn't around, sarowe and I were discussing that perhaps the root problem isn't so much how big *all* of the release artifacts are combined (since most users aren't downloading more than one of the artifacts per version) but how to minimize the amount of network transfer involved in preparing & smoke-testing an RC.

This led me to spitball the following idea for what an "ideal" release process might look like, by taking more advantage of both Jenkins & the svn "release" repo (which can be used for RCs as well as the final release artifacts)...

{panel}
* someone steps up to be an RM
* if this is a major or minor release:
** the RM creates the new branch in SVN,
** the RM clicks some buttons in jenkins to create a new "prep-release" job on 
the new branch
* a week or so goes by...
** people merge things into the release branch as needed
** the "prep-release" job in jenkins is constantly doing _almost_ everything the current buildAndPushRelease.py does, with the notable exception of signing the artifacts or pushing the data anywhere
** the "prep-release" job will also build up a directory containing all of the 
docs & jdocs we intend to publish on the website for this release once it's 
official
* when the RM is ready, they SSH into the lucene jenkins machine, CD into the 
latest artifacts dir of the "prep-release" job and:
** run some kind of "stage-maven-jars.py" that handles the maven side of 
staging the RC
** svn commits the non-maven artifacts into 
https://dist.apache.org/repos/dist/dev/lucene/...
*** NOTE: this is the dir setup by infra specifically for release candidates
*** this could be fully scripted, or there could be a script, similar to the way we do ref-guide publishing, that moves files around as needed & then echoes out the svn commands to cut/paste for actually committing the files
** runs some script that stashes away the docs+jdocs for this RC in a more permanent local dir so they won't be deleted automatically by jenkins
* on the RM's local machine, he runs some "smoke-test-release.py 
--no-gpg-check" script pointed at the SVN URL of the release candidate
** this smoke-test-release.py script will do an svn checkout, followed by all of the normal things that our existing release smoke checker does
** (the "--no-gpg-check" is because the RM hasn't signed anything yet)
* if the smoke test script passes, then the RM (on his local machine) runs a "sign-releases.py" script on the artifacts & svn commits the newly created *.asc files (or maybe the script does that automatically)
* the RM then sends out the email calling a vote on the RC
** everybody else can run the same "smoke-test-release.py" on the svn RC URL, 
but w/o using the "--no-gpg-check" since now the release is signed.
** people cast their votes as normal
* when the vote passes, the RM runs a "publish-rc.py" script, pointing at the 
SVN URL for the RC...
** this script will automatically do whatever magic is needed to promote the 
"staged" maven artifacts into "real" maven artifacts
** this script will then do a remote "svn mv" from the RC's svn URL to the final place the release files should live (in https://dist.apache.org/repos/dist/releases/lucene/...)
*** or, similar to how we deal with the ref guide: maybe it just echoes out the SVN commands for the RM to cut/paste and run manually
* once the dist mirror network has caught up:
** the RM ssh's back to the jenkins machine and cd's to the directory where the 
docs+jdocs for the RC got stashed
** the RM runs some "publish-javadocs.py" script that executes (or echoes, so the RM can cut/paste & manually run...) the needed SSH commands to publish the javadocs on lucene.apache.org
{panel}

The net gains here would be:
* fewer manual steps for the RM - jenkins does most of the heavy lifting
* the RM never has to "upload" any release artifacts (or big directories of 
javadocs) from their _local_ machine to any remote server - all large transfers 
are:
** jenkins -> dist.apache.org
** dist.apache.org -> dist.apache.org (remote svn mv)
** jenkins -> lucene.apache.org
* the RM only has to "download" the RC like everyone else
** the only files the RM "uploads" are the signature files, as part of an svn commit



While I certainly agree that it would be nice to find ways to make the _individual_ release artifacts smaller (to facilit

[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965956#comment-13965956
 ] 

Anshum Gupta commented on LUCENE-5590:
--

I think [~upayavira] is referring to training for 'Solr' users. Let's not add a step for the segment of users who are on Windows and are just trying to evaluate or get started with Solr.

> remove .zip binary artifacts
> 
>
> Key: LUCENE-5590
> URL: https://issues.apache.org/jira/browse/LUCENE-5590
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Robert Muir
>
> It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965943#comment-13965943
 ] 

Robert Muir commented on LUCENE-5590:
-

Training for Lucene users who really need a binary dist, not from maven, not the src release, and don't know what a tgz is?

I do not care about such users; I don't think they exist.

> remove .zip binary artifacts
> 
>
> Key: LUCENE-5590
> URL: https://issues.apache.org/jira/browse/LUCENE-5590
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Robert Muir
>
> It is enough to release this as .tgz






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965928#comment-13965928
 ] 

Upayavira commented on LUCENE-5590:
---

I regularly run training courses. I regularly come across delegates who cannot 
handle tarballs. If tgz were the only form available, I would have to rezip it 
and distribute it as a zip file.

If the issue is the upload time, then we should talk to infra to work out a way 
to get the zipping work off your own laptops. That seems to me a better 
resolution.







[jira] [Commented] (LUCENE-5111) Fix WordDelimiterFilter

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965918#comment-13965918
 ] 

Hoss Man commented on LUCENE-5111:
--

bq. -1 We should really not change the behaviour of analysis components in 
minor releases.

Agreed, -1



> Fix WordDelimiterFilter
> ---
>
> Key: LUCENE-5111
> URL: https://issues.apache.org/jira/browse/LUCENE-5111
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5111.patch, LUCENE-5111.patch
>
>
> WordDelimiterFilter is documented as broken in TestRandomChains 
> (LUCENE-4641). Given how widely used it is, we should try to fix it.






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965917#comment-13965917
 ] 

Robert Muir commented on LUCENE-5590:
-

I think it's adequately described in the parent issue. Our release process 
generates 600MB of "stuff". The size just keeps getting bigger and bigger, and 
is already difficult to work with (e.g. uploading across the internet).

A lot of the size is duplication: the source code is released two ways, once as 
a tar.gz and again via Maven. The .class files are released three ways (two 
binary formats, then Maven). The javadocs are generated and added to the binary 
distributions (then multiplied twice again, for .tar.gz and .zip), as well as 
being generated for Maven. 

It would be nice if the release was a reasonable size. I don't understand how 
zip is helping any user. If a user does not know what to do with a .tgz, I do 
not think I can help them!







[jira] [Resolved] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5588.
---

Resolution: Fixed

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588-nonexistfix.patch, LUCENE-5588.patch, 
> LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
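The approach in the description above can be sketched as a minimal stand-alone illustration. This is not Lucene's actual FSDirectory code; it assumes a POSIX filesystem (e.g. Linux) where opening a directory for read is permitted:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DirectoryFsync {
    // Open the directory itself (not a file inside it) and force its
    // metadata to stable storage. This works on Linux; Windows refuses to
    // open directories this way and throws an IOException.
    static void fsyncDirectory(Path dir) throws IOException {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        fsyncDirectory(Paths.get("."));
        System.out.println("directory synced");
    }
}
```

The key point is exactly the one from the description: unlike RandomAccessFile, FileChannel.open() accepts a directory path, which is what makes this possible on Java 7.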






[jira] [Created] (LUCENE-5595) TestICUNormalizer2CharFilter test failure

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5595:
---

 Summary: TestICUNormalizer2CharFilter test failure
 Key: LUCENE-5595
 URL: https://issues.apache.org/jira/browse/LUCENE-5595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Seems it does the offsets differently with a spoonfed reader.

seed for 4.x:

 ant test  -Dtestcase=TestICUNormalizer2CharFilter 
-Dtests.method=testRandomStrings -Dtests.seed=19423CE8988D3E11 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en 
-Dtests.timezone=America/Bahia_Banderas -Dtests.file.encoding=UTF-8






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965909#comment-13965909
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586476 from uschind...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586476 ]

Merged revision(s) 1586475 from lucene/dev/trunk:
LUCENE-5588: Workaround for fsyncing non-existing directory







[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965908#comment-13965908
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586475 from uschind...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586475 ]

LUCENE-5588: Workaround for fsyncing non-existing directory







Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9897 - Failure!

2014-04-10 Thread Robert Muir
This is a different bug; in this case the offsets computation is wrong.
I'll open an issue.

On Sat, Apr 5, 2014 at 6:37 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9897/
> Java: 32bit/jdk1.7.0_60-ea-b10 -server -XX:+UseSerialGC
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings
>
> Error Message:
> startOffset 107 expected:<587> but was:<588>
>
> Stack Trace:
> java.lang.AssertionError: startOffset 107 expected:<587> but was:<588>
> at 
> __randomizedtesting.SeedInfo.seed([19423CE8988D3E11:91CB3C563B896924]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:181)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
> at 
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:186)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rul

Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0) - Build # 9899 - Failure!

2014-04-10 Thread Robert Muir
This was the same buffering bug; it's fixed.

On Sat, Apr 5, 2014 at 10:15 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9899/
> Java: 32bit/jdk1.8.0 -client -XX:+UseSerialGC
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings
>
> Error Message:
> term 450 expected: but was:
>
> Stack Trace:
> org.junit.ComparisonFailure: term 450 expected: but was:
> at 
> __randomizedtesting.SeedInfo.seed([B4100D1094B8E1C3:3C990DAE37BCB6F6]:0)
> at org.junit.Assert.assertEquals(Assert.java:125)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:179)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
> at 
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:202)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsR

[jira] [Reopened] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-5588:
---


There is a problem in Solr: Solr sometimes tries to call FSDirectory.sync on a 
directory that does not even exist. This seems to happen when the index is 
empty and NRTCachingDirectory is used. In that case IndexWriter syncs with an 
empty file list.

The fix is to only sync the directory itself if any file inside it was synced 
before. Otherwise there is no need to sync at all.

We should fix this behaviour in the future. Maybe the directory should be 
created up front so it always exists?
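The workaround described above can be sketched like this (a hypothetical helper, not the actual patch): skip the directory fsync entirely when no files were synced, so a not-yet-existing directory is never opened.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Collection;
import java.util.Collections;

public class ConditionalDirSync {
    // Only fsync the directory if at least one file inside it was synced;
    // with an empty file list there is nothing to make durable, and the
    // directory may not even exist yet (the NRTCachingDirectory case).
    static void syncDirIfNeeded(Path dir, Collection<String> syncedFiles)
            throws IOException {
        if (syncedFiles.isEmpty()) {
            return; // nothing was written, so skip the directory fsync
        }
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        // An empty sync set against a missing directory must not throw.
        syncDirIfNeeded(Paths.get("does-not-exist"),
                        Collections.<String>emptyList());
        System.out.println("no-op for empty file list");
    }
}
```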







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965902#comment-13965902
 ] 

Upayavira commented on LUCENE-5590:
---

What is the problem that this is attempting to solve? It seems that providing 
tgz and zip distributions is maximising our ability to help our users.

Unless there is a really good reason to change what we do, then this seems like 
a sure-fire way to annoy one half of our users.







[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588-nonexistfix.patch

Here is a fix for the failures.







Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Robert Muir
No released version of Java 7 yet has the fix.

I use update 25 too. I think it's a good one to test since it's the only
safe version you can currently use.

On Thu, Apr 10, 2014 at 3:30 PM, Dawid Weiss
 wrote:
> But that issue has been fixed (supposedly)?
> D.
>
> On Thu, Apr 10, 2014 at 11:27 PM, Anshum Gupta  wrote:
>> I think the reason why he's still running that version is
>> https://issues.apache.org/jira/browse/LUCENE-5212.
>>
>>
>>
>>
>> On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss 
>> wrote:
>>>
>>> [junit4] # JRE version: 7.0_25-b15
>>> [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
>>> mode linux-amd64 compressed oops)
>>>
>>> Simon, is there a chance you could update your JVM? This one is quite
>>> old; if we ran on a newer one we could
>>> ping Oracle to see into the issue.
>>>
>>> Dawid
>>>
>>> On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
>>>  wrote:
>>> >
>>> > FWIW: reproduce line does not reproduce for me.
>>> >
>>> > : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
>>> > : From: buil...@flonkings.com
>>> > : Reply-To: dev@lucene.apache.org
>>> > : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
>>> > : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build #
>>> > 82046 -
>>> > : Failure!
>>> > :
>>> > : Build:
>>> > builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
>>> > :
>>> > : 1 tests failed.
>>> > : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
>>> > :
>>> > : Error Message:
>>> > : java.util.concurrent.ExecutionException:
>>> > java.lang.NullPointerException
>>> > :
>>> > : Stack Trace:
>>> > : java.lang.RuntimeException: java.util.concurrent.ExecutionException:
>>> > java.lang.NullPointerException
>>> > :   at
>>> > __randomizedtesting.SeedInfo.seed([A430328F79AAEB71:F8BEFE5463C35EDF]:0)
>>> > :   at
>>> > org.apache.lucene.search.IndexSearcher$ExecutionHelper.next(IndexSearcher.java:836)
>>> > :   at
>>> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:542)
>>> > :   at
>>> > org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:416)
>>> > :   at
>>> > org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:259)
>>> > :   at
>>> > org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:205)
>>> > :   at
>>> > org.apache.lucene.search.TestSearchAfter.testQueries(TestSearchAfter.java:189)
>>> > :   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> > :   at
>>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> > :   at
>>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> > :   at java.lang.reflect.Method.invoke(Method.java:606)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>>> > :   at
>>> > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>>> > :   at
>>> > org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>>> > :   at
>>> > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>>> > :   at
>>> > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>>> > :   at
>>> > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>>> > :   at
>>> > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
>>> > :   at
>>> > com.carrotsearch.randomizedtesting.RandomizedRunner$5.e

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0) - Build # 10017 - Failure!

2014-04-10 Thread Robert Muir
I committed a fix.

On Sun, Apr 6, 2014 at 9:32 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10017/
> Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testNFCHuge
>
> Error Message:
> term 0 expected:<...?睶Qܻǹ䃓̇o򊳝ꀪ߿򊳝𽫖̾䄴泴.[͈򊳝𽫖𝉃]殧򊳝𽫖𝉃𾲖؝㣓�ᇝ䤣7�ľD儹�쪑?...> but 
> was:<...?睶Qܻǹ䃓̇o򊳝𽫖𝉃𾲖򊳝ꀪ߿򊳝𽫖𝉃𾲖򊳝𽫖̾䄴泴.[򊳝𽫖𝉃𾲖򊳝𽫖͈𝉃]殧򊳝𽫖𝉃𾲖򊳝𽫖𝉃𾲖؝㣓�ᇝ䤣7�ľD儹�쪑?...>
>
> Stack Trace:
> org.junit.ComparisonFailure: term 0 
> expected:<...?睶Qܻǹ䃓̇o򊳝ꀪ߿𽫖̾䄴泴.[͈𝉃]殧𾲖؝㣓�ᇝ䤣7�ľD儹�쪑?...> but 
> was:<...?睶Qܻǹ䃓̇o򊳝ꀪ߿𽫖̾䄴泴.[͈𝉃]殧𾲖؝㣓�ᇝ䤣7�ľD儹�쪑?...>
> at 
> __randomizedtesting.SeedInfo.seed([70B714E14A099CA3:7E48F2705D28916E]:0)
> at org.junit.Assert.assertEquals(Assert.java:125)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:179)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:302)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:352)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:361)
> at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkOneTerm(BaseTokenStreamTestCase.java:425)
> at 
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.doTestMode(TestICUNormalizer2CharFilter.java:130)
> at 
> org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testNFCHuge(TestICUNormalizer2CharFilter.java:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOn

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Dawid Weiss
But that issue has been fixed (supposedly)?
D.

On Thu, Apr 10, 2014 at 11:27 PM, Anshum Gupta  wrote:
> I think the reason why he's still running that version is
> https://issues.apache.org/jira/browse/LUCENE-5212.
>
>
>
>
> On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss 
> wrote:
>>
>> [junit4] # JRE version: 7.0_25-b15
>> [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
>> mode linux-amd64 compressed oops)
>>
>> Simon, is there a chance you could update your JVM? This one is quite
>> old; if we ran on a newer one we could
>> ping Oracle to see into the issue.
>>
>> Dawid
>>
>> On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
>>  wrote:
>> >
>> > FWIW: reproduce line does not reproduce for me.
>> >
>> > : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
>> > : From: buil...@flonkings.com
>> > : Reply-To: dev@lucene.apache.org
>> > : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
>> > : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build #
>> > 82046 -
>> > : Failure!
>> > :
>> > : Build:
>> > builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
>> > :
>> > : 1 tests failed.
>> > : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
>> > :
>> > : Error Message:
>> > : java.util.concurrent.ExecutionException:
>> > java.lang.NullPointerException
>> > :
> > : [full stack trace in the original failure report below]

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Anshum Gupta
I think the reason why he's still running that version is
https://issues.apache.org/jira/browse/LUCENE-5212.




On Thu, Apr 10, 2014 at 2:16 PM, Dawid Weiss
wrote:

> [junit4] # JRE version: 7.0_25-b15
> [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
> mode linux-amd64 compressed oops)
>
> Simon, is there a chance you could update your JVM? This one is quite
> old; if we ran on a newer one we could
> ping Oracle to see into the issue.
>
> Dawid
>
> On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
>  wrote:
> >
> > FWIW: reproduce line does not reproduce for me.
> >
> > : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
> > : From: buil...@flonkings.com
> > : Reply-To: dev@lucene.apache.org
> > : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
> > : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build #
> 82046 -
> > : Failure!
> > :
> > : Build:
> builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
> > :
> > : 1 tests failed.
> > : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
> > :
> > : Error Message:
> > : java.util.concurrent.ExecutionException: java.lang.NullPointerException
> > :
> > : [full stack trace in the original failure report below]

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Dawid Weiss
[junit4] # JRE version: 7.0_25-b15
[junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed
mode linux-amd64 compressed oops)

Simon, is there a chance you could update your JVM? This one is quite
old; if we ran on a newer one we could
ping Oracle to see into the issue.

Dawid

On Thu, Apr 10, 2014 at 9:42 PM, Chris Hostetter
 wrote:
>
> FWIW: reproduce line does not reproduce for me.
>
> : Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
> : From: buil...@flonkings.com
> : Reply-To: dev@lucene.apache.org
> : To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
> : Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 -
> : Failure!
> :
> : Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
> :
> : 1 tests failed.
> : REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
> :
> : Error Message:
> : java.util.concurrent.ExecutionException: java.lang.NullPointerException
> :
> : [full stack trace in the original failure report below]

Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Chris Hostetter

: http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/

+1 to the artifacts with these SHAs...

47ee3825d8c0e0c67f7e17d84c9e8f8a896ccf7b *lucene-4.7.2-src.tgz
d5bee6e4245b8ba4cf7ecf3660b4b71dc4dd8471 *lucene-4.7.2.tgz
5a54e386b0284fc90fd9804979a80913e33a74df *lucene-4.7.2.zip
169470a771a3a5cc7283f77f3ddbb739bf4a0cc6 *solr-4.7.2-src.tgz
5576cf3931beb05baecaad82a5783afb6dc8d490 *solr-4.7.2.tgz
7e7bd18a02be6619190845624c889b1571de3821 *solr-4.7.2.zip



hossman@frisbee:~/tmp/4.7.2$ python3.2 
~/lucene/branch_4_7/dev-tools/scripts/smokeTestRelease.py 
http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/ 
1586229 4.7.2 RC2
...
SUCCESS! [1:03:25.017040]
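The posted SHA-1 sums can also be spot-checked without the smoke tester, using the JDK alone. A minimal sketch (the `sha1Hex` helper and the "hello\n" test vector are illustrative, not part of the release tooling; for a real artifact you would digest the downloaded .tgz/.zip bytes and compare against the sums above):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ShaCheck {
    // Compute the lowercase hex SHA-1 of a byte array, the same form
    // sha1sum prints and the release vote posts.
    static String sha1Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));  // Formatter masks negative bytes to unsigned
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Known test vector: sha1 of "hello\n" (same as `printf 'hello\n' | sha1sum`)
        System.out.println(sha1Hex("hello\n".getBytes(StandardCharsets.UTF_8)));
        // prints: f572d396fae9206628714fb2ce00f72e94f2258f
    }
}
```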




-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Steve Rowe
+1

SUCCESS! [1:05:26.776253]

On Apr 10, 2014, at 8:51 AM, Robert Muir  wrote:

> artifacts are here:
> 
> http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/
> 
> here is my +1
> SUCCESS! [0:46:25.014499]
> 
> 





Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread Chris Hostetter

FWIW: reproduce line does not reproduce for me.

: Date: Thu, 10 Apr 2014 21:05:16 +0200 (CEST)
: From: buil...@flonkings.com
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org, sim...@apache.org, uschind...@apache.org
: Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 -
: Failure!
: 
: Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/
: 
: 1 tests failed.
: REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries
: 
: Error Message:
: java.util.concurrent.ExecutionException: java.lang.NullPointerException
: 
: [full stack trace in the original failure report below]

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 82046 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/82046/

1 tests failed.
REGRESSION:  org.apache.lucene.search.TestSearchAfter.testQueries

Error Message:
java.util.concurrent.ExecutionException: java.lang.NullPointerException

Stack Trace:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([A430328F79AAEB71:F8BEFE5463C35EDF]:0)
at 
org.apache.lucene.search.IndexSearcher$ExecutionHelper.next(IndexSearcher.java:836)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:542)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:416)
at 
org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:259)
at 
org.apache.lucene.search.TestSearchAfter.assertQuery(TestSearchAfter.java:205)
at 
org.apache.lucene.search.TestSearchAfter.testQueries(TestSearchAfter.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at java.util.concurrent.FutureTask$Sync.innerGet(F
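The failure surfaces as a RuntimeException wrapping an ExecutionException because IndexSearcher runs sliced searches on an executor; the real culprit is the Caused-by NullPointerException from the worker thread. A minimal non-Lucene sketch of how java.util.concurrent preserves a worker's exception as the cause (the class and method names here are purely illustrative):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WrapDemo {
    // Runs a task that throws NPE on a worker thread and returns the class
    // name of the original exception, recovered from the ExecutionException.
    static String causeOfWorkerFailure() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> future = pool.submit(() -> {
                String s = null;
                return s.length();   // NullPointerException on the worker thread
            });
            future.get();            // rethrows it wrapped in ExecutionException
            return "no failure";
        } catch (ExecutionException e) {
            // The worker's exception (with its own stack trace) is the cause.
            return e.getCause().getClass().getSimpleName();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(causeOfWorkerFailure());  // prints: NullPointerException
    }
}
```

This is why the interesting frames in reports like the one above are under the "Caused by:" section, not in the outer trace.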

[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965690#comment-13965690
 ] 

Uwe Schindler commented on SOLR-5948:
-

Hi,
do you want to fix this for 4.8? If yes, please set to blocker, otherwise I 
will soon create the release branch!
Uwe

> Strange jenkins failure: *.si file not found in the middle of cloud test
> 
>
> Key: SOLR-5948
> URL: https://issues.apache.org/jira/browse/SOLR-5948
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-5948.jenkins.log.txt, 
> jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Resolved] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5588.
---

Resolution: Fixed

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
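The technique from the linked Stack Overflow answer can be sketched as follows. This is an illustrative best-effort helper, not Lucene's actual IOUtils/FSDirectory code; it assumes a POSIX filesystem where a directory may be opened for reading and fsync'd (on platforms such as Windows the open fails and the sync is skipped):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirFsync {
    // Best-effort fsync of a directory: persists directory metadata
    // (newly created or renamed file entries) to stable storage.
    static void fsyncDirectory(Path dir) throws IOException {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            // force(true) also flushes metadata; on a directory this makes
            // the directory entries themselves durable across power loss.
            ch.force(true);
        } catch (IOException e) {
            // Some platforms cannot open a directory for reading;
            // treat the directory sync as best-effort there.
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("fsync-demo");
        Files.createFile(dir.resolve("segments_1"));
        fsyncDirectory(dir);  // make the new directory entry durable
        System.out.println("synced " + dir);
    }
}
```

Note that RandomAccessFile cannot be used for this, since it requires a regular file, which is why the FileChannel-based sync from LUCENE-5570 enables it.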






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965684#comment-13965684
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586410 from uschind...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586410 ]

Merged revision(s) 1586407 from lucene/dev/trunk:
LUCENE-5588: Lucene now calls fsync() on the index directory, ensuring that all 
file metadata is persisted on disk in case of power failure.

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965681#comment-13965681
 ] 

ASF subversion and git services commented on LUCENE-5588:
-

Commit 1586407 from uschind...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586407 ]

LUCENE-5588: Lucene now calls fsync() on the index directory, ensuring that all 
file metadata is persisted on disk in case of power failure.

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965663#comment-13965663
 ] 

Michael McCandless commented on SOLR-5948:
--

OK that's good news :)  Cross fingers...

> Strange jenkins failure: *.si file not found in the middle of cloud test
> 
>
> Key: SOLR-5948
> URL: https://issues.apache.org/jira/browse/SOLR-5948
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-5948.jenkins.log.txt, 
> jenkins.Policeman.Lucene-Solr-trunk-MacOSX.1463.log.txt
>
>







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965645#comment-13965645
 ] 

Robert Muir commented on LUCENE-5590:
-

{quote}
 The .zip compression is a bit worse ... ~16% larger with the 4.7.1 release.
{quote}

It seems to already be set optimally; I set it to level 9 and got essentially 
the same-size binary package.


> remove .zip binary artifacts
> 
>
> Key: LUCENE-5590
> URL: https://issues.apache.org/jira/browse/LUCENE-5590
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Robert Muir
>
> It is enough to release this as .tgz






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965648#comment-13965648
 ] 

Hoss Man commented on SOLR-5948:


bq. The corruption case for LUCENE-5574 is quite narrow: something (e.g. 
replication) has to copy over index files that replace previously used filenames.

That's certainly possible in these tests -- in both of the attached logs, Solr's 
SnapPuller was used by a replica to catch up with its leader just prior 
to encountering the FileNotFoundExceptions.








[jira] [Created] (LUCENE-5594) don't call 'svnversion' over and over in the build

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5594:
---

 Summary: don't call 'svnversion' over and over in the build
 Key: LUCENE-5594
 URL: https://issues.apache.org/jira/browse/LUCENE-5594
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Some ant tasks (at least release packaging, possibly others) call svnversion 
over and over for each module in the build. Can we just do this one 
time instead?
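The fix could be as simple as memoizing the command's output. A hedged sketch of the idea (the class name and module list are invented, and this is not the actual build code, which is ant rather than Java):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class SvnVersionCache {
    private static String cached; // computed at most once per build JVM

    // Run `svnversion` a single time and reuse the result, instead of
    // shelling out once per module.
    public static synchronized String get() {
        if (cached == null) {
            cached = runSvnversion();
        }
        return cached;
    }

    private static String runSvnversion() {
        try {
            Process p = new ProcessBuilder("svnversion", ".").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();
                return (line == null || line.isEmpty()) ? "unknown" : line.trim();
            }
        } catch (IOException e) {
            return "unknown"; // svnversion not installed, or not a working copy
        }
    }

    public static void main(String[] args) {
        // Many "modules", but svnversion is spawned at most once.
        for (String module : new String[] {"core", "analysis", "queries"}) {
            System.out.println("packaging " + module + " at revision " + get());
        }
    }
}
```

In ant terms, the equivalent would be computing the revision into a single property at the top level and passing it down to sub-builds.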






[jira] [Created] (LUCENE-5593) javadocs generation in release tasks: painfully slow

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5593:
---

 Summary: javadocs generation in release tasks: painfully slow
 Key: LUCENE-5593
 URL: https://issues.apache.org/jira/browse/LUCENE-5593
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Something is wrong here in the way this generation works; I see some of the 
same javadocs being generated over and over again.

The current ant tasks seem to have O(n!) runtime with respect to how many 
modules we have: it's obnoxiously slow on a non-beast computer. There is a bug 
here...






[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965615#comment-13965615
 ] 

Michael McCandless commented on SOLR-5948:
--

The corruption case for LUCENE-5574 is quite narrow: something (e.g. 
replication) has to copy over index files that replace previously used filenames.

Lucene itself never does this (it's write once), but if e.g. these tests can 
overwrite pre-existing filenames then it could explain it.








[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965597#comment-13965597
 ] 

Robert Muir commented on LUCENE-5590:
-

The current artifacts are 600MB in size. This is an easy way to attack this 
problem.

Maybe I shouldn't have described the issue as "removing" something (since it's 
not a mandatory part of an Apache release), but instead as "don't create 
convenience binaries twice".







[jira] [Commented] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965564#comment-13965564
 ] 

Shawn Heisey commented on LUCENE-5589:
--

My big concern with the release artifacts is user download time.  The Solr 
binary download is *HUGE* ... whenever I need to download a binary release to 
update my custom projects, I dread doing so when I'm at home where bandwidth is 
limited.

Solr's competition includes ElasticSearch.  Their .zip download is 21.6MB and 
the .tar.gz is even smaller.  Solr's .war file is larger than either, and 
that's just the tip of the iceberg.  There's a lot more 'stuff' in a Solr 
download, but the majority of users don't need that stuff.  Why should they 
download it unless they need it?


> release artifacts are too large.
> 
>
> Key: LUCENE-5589
> URL: https://issues.apache.org/jira/browse/LUCENE-5589
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Doing a release currently products *600MB* of artifacts. This is unwieldy...






[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965561#comment-13965561
 ] 

Shawn Heisey commented on LUCENE-5590:
--

bq. Maybe we should ship the .zip and not the .tgz?

This might work.  'unzip' is a standard program on every *NIX machine that I 
use regularly.







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965558#comment-13965558
 ] 

Shawn Heisey commented on LUCENE-5590:
--

If it came to an official vote, mine would be -0.  I don't oppose this strongly 
enough to block it, I just think it's a bad idea.








[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965548#comment-13965548
 ] 

Michael McCandless commented on LUCENE-5590:


Maybe we should ship the .zip and not the .tgz?  Is .zip more "universal"?  The 
.zip compression is a bit worse ... ~16% larger with the 4.7.1 release.







[jira] [Commented] (SOLR-5948) Strange jenkins failure: *.si file not found in the middle of cloud test

2014-04-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965543#comment-13965543
 ] 

Hoss Man commented on SOLR-5948:


Some speculation that these failures may have been caused by LUCENE-5574 ... but I'm 
not sure; I don't fully understand the scope of that bug and whether it could have 
led to a situation where _some_ (but not all) of the index files got deleted 
out from under the reader.








[jira] [Commented] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965541#comment-13965541
 ] 

Varun Thacker commented on SOLR-5340:
-

I should have been clearer, I guess. This was the approach I had planned to 
take:

1. Use this Jira to add the ability for named snapshots/backups. This would be 
at a core level and thus could also be used by non-SolrCloud users.
2. In SOLR-5750, work on providing a seamless backup collection and restore 
collection API. It would utilise the work done on this Jira.


> Add support for named snapshots
> ---
>
> Key: SOLR-5340
> URL: https://issues.apache.org/jira/browse/SOLR-5340
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.5
>Reporter: Mike Schrag
> Attachments: SOLR-5340.patch
>
>
> It would be really nice if Solr supported named snapshots. Right now if you 
> snapshot a SolrCloud cluster, every node potentially records a slightly 
> different timestamp. Correlating those back together to effectively restore 
> the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Comment Edited] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965536#comment-13965536
 ] 

Noble Paul edited comment on SOLR-5340 at 4/10/14 4:56 PM:
---

Does it mean that a user will have to fire a request to all nodes where this 
collection is running? That is complex.
Can this be a single collection admin command where you specify a 
collection name + snapshot name, and the system identifies the nodes and fires 
separate requests to each node?

I should be able to do the restore similarly. Working with individual 
nodes should be discouraged as much as possible.


was (Author: noble.paul):
does it mean that a user will have to fire a request to all nodes where this 
collection is running? It is complex 
How can this be a single collection admin command where you can say backup a 
collection with some name and the system can identify the nodes and fire 
separate requests to each node 







[jira] [Commented] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965536#comment-13965536
 ] 

Noble Paul commented on SOLR-5340:
--

Does it mean that a user will have to fire a request to all nodes where this 
collection is running? That is complex.
How can this be made a single collection admin command, where you back up a 
collection with some name and the system identifies the nodes and fires 
separate requests to each node?







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965524#comment-13965524
 ] 

Shawn Heisey commented on LUCENE-5590:
--

Someone who is interested in evaluating Solr and is on a Windows machine is 
likely to simply move on to another solution like ElasticSearch if they cannot 
find a .zip download.  Or were you just talking about Lucene itself?

I personally will be OK.  I don't run actual indexes (Solr) on Windows, but I 
download the .zip fairly frequently because my own computer where I do 
development work runs Windows 7.  I know what to do, and I don't have any 
restrictions on what I can install.  There will be people who look at a .tgz 
and have no idea what to do with it, and others who will be unable to install 
the required software.








Re: [VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Michael McCandless
+1

SUCCESS! [0:46:33.654703]


Mike McCandless

http://blog.mikemccandless.com


On Thu, Apr 10, 2014 at 10:51 AM, Robert Muir  wrote:
> artifacts are here:
>
> http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/
>
> here is my +1
> SUCCESS! [0:46:25.014499]
>




[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965475#comment-13965475
 ] 

Robert Muir commented on LUCENE-5590:
-

I don't think this argument applies: you already cannot use this software on a 
completely vanilla Windows system anyway. You must at least install a JVM to do 
anything with it.







[jira] [Commented] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965464#comment-13965464
 ] 

Shawn Heisey commented on LUCENE-5590:
--

There is no support in a completely vanilla Windows system for extracting a 
tarfile, gzipped or not.  It requires installing additional software, and some 
people work in tightly controlled environments where they cannot install 
anything.  For people who work in that kind of environment, getting a piece of 
software approved is a process that may take months, and if they are caught 
subverting security mechanisms to use an unapproved program, their employment 
could be terminated.








[jira] [Updated] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5592:


Fix Version/s: 5.0
   4.8

> Incorrectly reported uncloseable files.
> ---
>
> Key: LUCENE-5592
> URL: https://issues.apache.org/jira/browse/LUCENE-5592
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Fix For: 4.8, 5.0
>
>
> As pointed out by Uwe, something dodgy is going on with unremovable file 
> detection because they seem to cross a suite boundary, as in.
> {code}
> // trunk
> svn update -r1586300
> cd lucene\core
> ant clean test -Dtests.directory=SimpleFSDirectory
> {code}
> {code}
>[junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
> ...
>[junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
>[junit4]>
> C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
>[junit4]>
> C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
>[junit4]>
> C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
>[junit4]>
> C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
>[junit4]>  at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
>[junit4]>  at 
> org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
>[junit4]>  at java.lang.Thread.run(Thread.java:722)
>[junit4] Completed on J1 in 0.41s, 8 tests, 1 error <<< FAILURES!
> {code}






[jira] [Resolved] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5592.
-

Resolution: Fixed







[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965451#comment-13965451
 ] 

ASF subversion and git services commented on LUCENE-5592:
-

Commit 1586338 from dwe...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1586338 ]

LUCENE-5592: Incorrectly reported uncloseable files.







[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965448#comment-13965448
 ] 

ASF subversion and git services commented on LUCENE-5592:
-

Commit 1586337 from dwe...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1586337 ]

LUCENE-5592: Incorrectly reported uncloseable files.







[jira] [Commented] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965429#comment-13965429
 ] 

Dawid Weiss commented on LUCENE-5592:
-

Oh, it's a silly, silly bug. I'll clean up the code as part of this though.







[jira] [Commented] (SOLR-4787) Join Contrib

2014-04-10 Thread Kranti Parisa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965413#comment-13965413
 ] 

Kranti Parisa commented on SOLR-4787:
-

Arul, thanks for posting the findings.

I don't think LONG fields are supported by bjoin.

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 4.8
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will setup a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be included in the results.
> The solrconfig.xml in the main query core must contain the reference to the 
> hjoin.
> <queryParser name="hjoin" class="org.apache.solr.joins.HashSetJoinQParserPlugin"/>
> And the join contrib lib jars must be registered in the solrconfig.xml.
>  
> After issuing the "ant dist" command from inside the solr directory the joins 
> contrib jar will appear in the sol
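The hash-set join mechanics described in the quoted hjoin description can be illustrated with a toy sketch. This is plain Java, not the actual Solr plugin API; the doc/key layout (`int[]{docId, key}` pairs) is invented purely for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashSetJoinSketch {
    // Toy illustration of the hjoin idea: collect the "from" keys of the
    // matching docs in the fromIndex into a hashset, then keep only the
    // main-query docs whose "to" key is present in that set.
    static List<Integer> join(List<int[]> fromDocs, List<int[]> mainDocs) {
        // fromDocs / mainDocs entries are {docId, joinKey} pairs.
        Set<Integer> keys = new HashSet<>();
        for (int[] d : fromDocs) {
            keys.add(d[1]);
        }
        List<Integer> result = new ArrayList<>();
        for (int[] d : mainDocs) {
            if (keys.contains(d[1])) {
                result.add(d[0]);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<int[]> from = Arrays.asList(new int[]{1, 100}, new int[]{2, 200});
        List<int[]> main = Arrays.asList(
                new int[]{10, 100}, new int[]{11, 300}, new int[]{12, 200});
        System.out.println(join(from, main));
    }
}
```

This also shows why the hjoin trades memory for speed: the set of "from" keys must fit in the heap, which is the extra cost relative to the JoinQParserPlugin noted above.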

[jira] [Updated] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5592:


Description: 
As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
...
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
   [junit4]>at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
   [junit4]>at java.lang.Thread.run(Thread.java:722)
   [junit4] Completed on J1 in 0.41s, 8 tests, 1 error <<< FAILURES!
{code}

  was:
As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4]   2> NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88)), sim=DefaultSimilarity, locale=en_MT, timezone=America/Menominee
   [junit4]   2> NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.7.0_03 
(64-bit)/cpus=8,threads=1,free=209293024,total=342491136
   [junit4]   2> NOTE: All tests run in this JVM: [TestNoMergePolicy, 
TestPriorityQueue, TestBagOfPositions, TestSpans, TestNRTThreads, 
TestIndexWriterExceptions, TestSimpleAttributeImpl, TestAtomicUpdate, 
TestStressAdvance, Nested1, TestCharsRef, TestBlockPostingsFormat3, 
TestMultiFields, TestDocumentWriter, TestTwoPhaseCommitTool, 
TestCompiledAutomaton, TestNRTReaderWithThreads, TestTransactionRollback, 
TestSearchAfter, TestTermVectorsFormat, TestParallelCompositeReader, 
TestTermVectorsWriter, TestNearSpansOrdered, TestFilterAtomicReader, 
TestMultiTermQueryRewrites, TestLongPostings, TestThreadedForceMerge, TestLock, 
Nested, TestPrefixFilter, TestTermRangeQuery, TestFieldCache, 
TestRecyclingByteBlockAllocator, TestTerm, Test2BPositions, TestArrayUtil, 
Nested1, TestSpanSearchEquivalence]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSpanSearchEquivalence -Dtests.seed=8886562EBCD30121 
-Dtests.slow=true -Dtests.directory=SimpleFSDirectory -Dtests.locale=en_MT 
-Dtests.timezone=America/Menominee -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDire

[jira] [Created] (LUCENE-5592) Incorrectly reported uncloseable files.

2014-04-10 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-5592:
---

 Summary: Incorrectly reported uncloseable files.
 Key: LUCENE-5592
 URL: https://issues.apache.org/jira/browse/LUCENE-5592
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss


As pointed out by Uwe, something dodgy is going on with unremovable file 
detection because they seem to cross a suite boundary, as in.
{code}
// trunk
svn update -r1586300
cd lucene\core
ant clean test -Dtests.directory=SimpleFSDirectory
{code}

{code}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4]   2> NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=88)), sim=DefaultSimilarity, locale=en_MT, timezone=America/Menominee
   [junit4]   2> NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.7.0_03 
(64-bit)/cpus=8,threads=1,free=209293024,total=342491136
   [junit4]   2> NOTE: All tests run in this JVM: [TestNoMergePolicy, 
TestPriorityQueue, TestBagOfPositions, TestSpans, TestNRTThreads, 
TestIndexWriterExceptions, TestSimpleAttributeImpl, TestAtomicUpdate, 
TestStressAdvance, Nested1, TestCharsRef, TestBlockPostingsFormat3, 
TestMultiFields, TestDocumentWriter, TestTwoPhaseCommitTool, 
TestCompiledAutomaton, TestNRTReaderWithThreads, TestTransactionRollback, 
TestSearchAfter, TestTermVectorsFormat, TestParallelCompositeReader, 
TestTermVectorsWriter, TestNearSpansOrdered, TestFilterAtomicReader, 
TestMultiTermQueryRewrites, TestLongPostings, TestThreadedForceMerge, TestLock, 
Nested, TestPrefixFilter, TestTermRangeQuery, TestFieldCache, 
TestRecyclingByteBlockAllocator, TestTerm, Test2BPositions, TestArrayUtil, 
Nested1, TestSpanSearchEquivalence]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSpanSearchEquivalence -Dtests.seed=8886562EBCD30121 
-Dtests.slow=true -Dtests.directory=SimpleFSDirectory -Dtests.locale=en_MT 
-Dtests.timezone=America/Menominee -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | TestSpanSearchEquivalence (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0.fdt
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.doc
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001\_0_Lucene41_0.tim
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001\index-SimpleFSDirectory-001
   [junit4]>
C:\Work\lucene-solr-svn\trunk\lucene\build\core\test\J1\.\lucene.util.junitcompat.TestFailOnFieldCacheInsanity$Nested1-8886562EBCD30121-001
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8886562EBCD30121]:0)
   [junit4]>at org.apache.lucene.util.TestUtil.rm(TestUtil.java:118)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase$TemporaryFilesCleanupRule.afterAlways(LuceneTestCase.java:2358)
   [junit4]>at java.lang.Thread.run(Thread.java:722)
   [junit4] Completed on J1 in 0.41s, 8 tests, 1 error <<< FAILURES!
{code}






[VOTE] Lucene / Solr 4.7.2 (take two)

2014-04-10 Thread Robert Muir
artifacts are here:

http://people.apache.org/~rmuir/staging_area/lucene_solr_4_7_2_r1586229/

here is my +1
SUCCESS! [0:46:25.014499]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5473:


Attachment: SOLR-5473-74.patch

Thanks Noble.

There were a few conflicts in CollectionsAPIDistributedZKTest which I have 
fixed in this patch. I also introduced a system property 
"tests.solr.stateFormat" which sets the stateFormat to be used for the default 
collection. If this property is not set then the state format is chosen 
randomly.

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, ec2-23-20-119-52_solr.log, 
> ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node






[jira] [Commented] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965405#comment-13965405
 ] 

Shai Erera commented on LUCENE-5591:


I will. I think over-estimating is better than under-estimating in that case, 
since in the worst case the files will be flushed to disk rather than the app 
hitting OOM.

> ReaderAndUpdates should create a proper IOContext when writing DV updates
> -
>
> Key: LUCENE-5591
> URL: https://issues.apache.org/jira/browse/LUCENE-5591
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shai Erera
>
> Today we pass IOContext.DEFAULT. If DV updates are used in conjunction w/ 
> NRTCachingDirectory, it means the latter will attempt to write the entire DV 
> field in its RAMDirectory, which could lead to OOM.
> Would be good if we can build our own FlushInfo, estimating the number of 
> bytes we're about to write. I didn't see off hand a quick way to guesstimate 
> that - I thought to use the current DV's sizeInBytes as an approximation, but 
> I don't see a way to get it, not a direct way at least.
> Maybe we can use the size of the in-memory updates to guesstimate that 
> amount? Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Is 
> it a too wild guess?






[jira] [Commented] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965403#comment-13965403
 ] 

Michael McCandless commented on LUCENE-5591:


+1, good catch.

I think that guesstimate is a good start. It likely wildly over-estimates, 
though, since in-RAM structures are usually much more costly than the on-disk 
format; maybe try it out and see how much it over-estimates?

> ReaderAndUpdates should create a proper IOContext when writing DV updates
> -
>
> Key: LUCENE-5591
> URL: https://issues.apache.org/jira/browse/LUCENE-5591
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shai Erera
>
> Today we pass IOContext.DEFAULT. If DV updates are used in conjunction w/ 
> NRTCachingDirectory, it means the latter will attempt to write the entire DV 
> field in its RAMDirectory, which could lead to OOM.
> Would be good if we can build our own FlushInfo, estimating the number of 
> bytes we're about to write. I didn't see off hand a quick way to guesstimate 
> that - I thought to use the current DV's sizeInBytes as an approximation, but 
> I don't see a way to get it, not a direct way at least.
> Maybe we can use the size of the in-memory updates to guesstimate that 
> amount? Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Is 
> it a too wild guess?






[jira] [Created] (LUCENE-5591) ReaderAndUpdates should create a proper IOContext when writing DV updates

2014-04-10 Thread Shai Erera (JIRA)
Shai Erera created LUCENE-5591:
--

 Summary: ReaderAndUpdates should create a proper IOContext when 
writing DV updates
 Key: LUCENE-5591
 URL: https://issues.apache.org/jira/browse/LUCENE-5591
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shai Erera


Today we pass IOContext.DEFAULT. If DV updates are used in conjunction w/ 
NRTCachingDirectory, it means the latter will attempt to write the entire DV 
field in its RAMDirectory, which could lead to OOM.

It would be good if we could build our own FlushInfo, estimating the number of 
bytes we're about to write. I didn't see, offhand, a quick way to guesstimate 
that - I thought of using the current DV's sizeInBytes as an approximation, but 
I don't see a way to get it, at least not directly.

Maybe we can use the size of the in-memory updates to guesstimate that amount? 
Something like {{sizeOfInMemUpdates * (maxDoc/numUpdatedDocs)}}? Or is that too 
wild a guess?
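The proposed guesstimate can be sketched as follows. The helper method and parameter names are illustrative only, not actual Lucene API:

```java
public class FlushEstimate {
    // Hypothetical estimate of the on-disk bytes a DV-update flush will
    // write: scale the in-memory update size up from the updated docs to
    // the full doc range, i.e. sizeOfInMemUpdates * (maxDoc / numUpdatedDocs).
    static long estimateBytes(long sizeOfInMemUpdates, int maxDoc, int numUpdatedDocs) {
        if (numUpdatedDocs == 0) {
            return 0;
        }
        return (long) (sizeOfInMemUpdates * ((double) maxDoc / numUpdatedDocs));
    }

    public static void main(String[] args) {
        // 1 MB of in-memory updates covering 10,000 of 1,000,000 docs
        // scales up to an estimate of ~100 MB.
        System.out.println(estimateBytes(1_048_576, 1_000_000, 10_000));
    }
}
```

Under this sketch the estimate errs on the large side, which is the safer direction here: NRTCachingDirectory would flush such a file to disk instead of caching it in RAM.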






[jira] [Updated] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-5963:
-

Affects Version/s: (was: 4.8)
   4.9

> Finalize interface and backport analytics component to 4x
> -
>
> Key: SOLR-5963
> URL: https://issues.apache.org/jira/browse/SOLR-5963
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.9, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5963.patch
>
>
> Now that we seem to have fixed up the test failures for trunk for the 
> analytics component, we need to solidify the API and back-port it to 4x. For 
> history, see SOLR-5302 and SOLR-5488.
> As far as I know, these are the merges that need to occur to do this (plus 
> any that this JIRA brings up)
> svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
> The only remaining thing I think needs to be done is to solidify the 
> interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
> although SOLR-5488 is the most relevant one.
> [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
> interested here.
> I really want to put this to bed, so if we can get agreement on this soon I 
> can make it march.






[jira] [Commented] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965395#comment-13965395
 ] 

Erick Erickson commented on SOLR-5963:
--

Unless there are objections, I plan on committing this to 4.9 after the 4.8 
branch happens so we have maximum time to let it bake in 4x before releasing it 
into the wild.

> Finalize interface and backport analytics component to 4x
> -
>
> Key: SOLR-5963
> URL: https://issues.apache.org/jira/browse/SOLR-5963
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.9, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5963.patch
>
>
> Now that we seem to have fixed up the test failures for trunk for the 
> analytics component, we need to solidify the API and back-port it to 4x. For 
> history, see SOLR-5302 and SOLR-5488.
> As far as I know, these are the merges that need to occur to do this (plus 
> any that this JIRA brings up)
> svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
> The only remaining thing I think needs to be done is to solidify the 
> interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
> although SOLR-5488 is the most relevant one.
> [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
> interested here.
> I really want to put this to bed, so if we can get agreement on this soon I 
> can make it march.






[jira] [Updated] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-04-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-5963:
-

Affects Version/s: 5.0
  Summary: Finalize interface and backport analytics component to 
4x  (was: backport analytics component to 4x)

> Finalize interface and backport analytics component to 4x
> -
>
> Key: SOLR-5963
> URL: https://issues.apache.org/jira/browse/SOLR-5963
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.8, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5963.patch
>
>
> Now that we seem to have fixed up the test failures for trunk for the 
> analytics component, we need to solidify the API and back-port it to 4x. For 
> history, see SOLR-5302 and SOLR-5488.
> As far as I know, these are the merges that need to occur to do this (plus 
> any that this JIRA brings up)
> svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
> svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
> The only remaining thing I think needs to be done is to solidify the 
> interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
> although SOLR-5488 is the most relevant one.
> [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
> interested here.
> I really want to put this to bed, so if we can get agreement on this soon I 
> can make it march.






[jira] [Assigned] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-5588:
-

Assignee: Uwe Schindler

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2
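The directory-fsync trick from the linked Stack Overflow answer can be sketched as a best-effort helper. This is illustrative only, not the attached patch: directory fsync is not supported on every platform (e.g. Windows), so the IOException is deliberately swallowed:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirFsync {
    // Best-effort fsync of a directory: unlike RandomAccessFile,
    // FileChannel.open() can open a directory, so force(true) pushes the
    // directory's metadata (e.g. a just-renamed segments file) to disk.
    static boolean tryFsyncDir(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
            return true;
        } catch (IOException e) {
            // Directory fsync unsupported on this platform/filesystem.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("fsync-demo");
        System.out.println("fsync attempted, succeeded=" + tryFsyncDir(tmp));
        Files.delete(tmp);
    }
}
```

On Linux this typically succeeds; the "at least try" wording in the description matches the swallow-on-failure design above.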






[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965373#comment-13965373
 ] 

Noble Paul edited comment on SOLR-5473 at 4/10/14 2:04 PM:
---

All tests now randomly use the external state format (even collection1).

A few bug fixes:

When a single state command made multiple updates to the same collection (as 
in a split), the first update was overwritten.

When a client had a collection state newer than the server's, a STALE state 
error was thrown; the server now updates its state.

I have done all the planned testing for this. I plan to commit this to trunk 
(and the 4x branch later) soon if there are no more concerns.

ZkStateReader.updateClusterState() was not updating external collections.


was (Author: noble.paul):
All tests now randomly use external (even collection1)

few  bug fixes

When , a single state command does multiple updates to the same collection  (as 
in split ) the first update was overwritten

When client has a collection state that is newer than server , STALE state was 
thrown. Server now updates its state

ZkStateReader.updateClusterState() was not updating external collections

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node






[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-04-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

All tests now randomly use external collections (even collection1).

A few bug fixes:

When a single state command made multiple updates to the same collection (as in split), the first update was overwritten.

When a client had a collection state newer than the server's, a STALE state was thrown. The server now updates its state.

ZkStateReader.updateClusterState() was not updating external collections.

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node






[jira] [Created] (LUCENE-5590) remove .zip binary artifacts

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5590:
---

 Summary: remove .zip binary artifacts
 Key: LUCENE-5590
 URL: https://issues.apache.org/jira/browse/LUCENE-5590
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir


It is enough to release this as .tgz






[jira] [Created] (LUCENE-5589) release artifacts are too large.

2014-04-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5589:
---

 Summary: release artifacts are too large.
 Key: LUCENE-5589
 URL: https://issues.apache.org/jira/browse/LUCENE-5589
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Doing a release currently produces *600MB* of artifacts. This is unwieldy...






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965354#comment-13965354
 ] 

Michael McCandless commented on LUCENE-5588:


+1, looks great!  Thanks Uwe.

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588.patch

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: (was: LUCENE-5588.patch)

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965316#comment-13965316
 ] 

Christian Ziech commented on LUCENE-5584:
-

Trying to assemble the patch, I came across the FST.Arc.copyFrom(Arc) method, 
which unfortunately seems to implicitly assume that the output of a node is 
immutable (which would no longer hold). Is this immutability intended? If not, 
I think the copyFrom() method would need to be moved into the FST class so 
that it can use the FST's Outputs to clone the output of the copied arc when 
it is mutable ... however, that would increase the size of the patch and 
possibly impact other users too ...

> Allow FST read method to also recycle the output value when traversing FST
> --
>
> Key: LUCENE-5584
> URL: https://issues.apache.org/jira/browse/LUCENE-5584
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Affects Versions: 4.7.1
>Reporter: Christian Ziech
>
> The FST class heavily reuses Arc instances when traversing the FST. The 
> output of an Arc however is not reused. This can especially be important when 
> traversing large portions of a FST and using the ByteSequenceOutputs and 
> CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
> node read (which has an output).
> In our use case we intersect a lucene Automaton with a FST much 
> like it is done in 
> org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
> since the Automaton and the FST are both rather large tens or even hundreds 
> of thousands of temporary byte array objects are created.
> One possible solution to the problem would be to change the 
> org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
> you don't want to change the existing methods for compatibility):
> {code}
>   /** Decode an output value previously written with {@link
>*  #write(Object, DataOutput)} reusing the object passed in if possible */
>   public abstract T read(DataInput in, T reuse) throws IOException;
>   /** Decode an output value previously written with {@link
>*  #writeFinalOutput(Object, DataOutput)}.  By default this
>*  just calls {@link #read(DataInput)}. This tries to  reuse the object   
>*  passed in if possible */
>   public T readFinalOutput(DataInput in, T reuse) throws IOException {
> return read(in, reuse);
>   }
> {code}
> The new methods could then be used in the FST in the readNextRealArc() method 
> passing in the output of the reused Arc. For most inputs they could even just 
> invoke the original read(in) method.
> If you should decide to make that change I'd be happy to supply a patch 
> and/or tests for the feature.
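The reuse pattern proposed in the quoted description can be illustrated with a standalone sketch using plain java.io types; this is not Lucene's actual Outputs API, and all names here are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;

/**
 * Sketch of "read with reuse": a length-prefixed byte[] output is decoded
 * into a caller-supplied buffer when it is large enough, instead of
 * allocating a fresh array per node.
 */
public class ReusableOutputs {

    /** Decodes a length-prefixed byte[]; reuses {@code reuse} if it fits.
     *  Returns the buffer actually used (may differ from {@code reuse}). */
    static byte[] read(DataInput in, byte[] reuse) throws IOException {
        int len = in.readInt();
        byte[] out = (reuse != null && reuse.length >= len) ? reuse : new byte[len];
        in.readFully(out, 0, len);
        return out;
    }

    /** Self-check: a big-enough reuse buffer is returned unchanged in
     *  identity, holding the decoded bytes. */
    static boolean demo() {
        try {
            // Encoded output: 4-byte big-endian length 3, then bytes {1, 2, 3}.
            byte[] encoded = {0, 0, 0, 3, 1, 2, 3};
            DataInput in = new DataInputStream(new ByteArrayInputStream(encoded));
            byte[] reuse = new byte[8];   // large enough to be reused
            byte[] result = read(in, reuse);
            return result == reuse        // no new allocation happened
                && Arrays.equals(Arrays.copyOf(result, 3), new byte[]{1, 2, 3});
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("reuse worked: " + demo());
    }
}
```

Note that returning an oversized buffer loses the valid length; a real implementation would carry the length alongside the bytes, much as Lucene's BytesRef does.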






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965312#comment-13965312
 ] 

Simon Willnauer commented on LUCENE-3237:
-

{quote} Ref counting may be overkill? Who else will be pulling/sharing this
sync handle? Maybe we can add a "IndexOutput.closeToSyncHandle", the
IndexOutput flushes and is unusable from then on, but returns the sync
handle which the caller must later close.{quote}

good!

{quote}

One downside of moving to this API is ... it rules out writing some
bytes, fsyncing, writing some more, fsyncing, e.g. if we wanted to add
a transaction log impl on top of Lucene. But I think that's OK
(design for today). There are other limitations in IndexOuput for
xlog impl...

{quote}

I don't see what keeps us from adding a sync method to IndexOutput that allows 
us to write some bytes, fsync, write some more, and fsync again. I think we 
should make this change nevertheless. This can go in today, independent of 
where we use it.

bq. Yeah we can pursue this in "phase 2". 
agreed

> FSDirectory.fsync() may not work properly
> -
>
> Key: LUCENE-3237
> URL: https://issues.apache.org/jira/browse/LUCENE-3237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Shai Erera
> Attachments: LUCENE-3237.patch
>
>
> Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
> FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
> written to the file by other FileDescriptors. It would be better if we do 
> this operation on the actual RAF/FileOS which wrote the data. We can add 
> sync() to IndexOutput and FSIndexOutput will do that.
> Directory-wise, we should stop syncing on file names, and instead sync on the 
> IOs that performed the write operations.






[jira] [Updated] (SOLR-5340) Add support for named snapshots

2014-04-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5340:


Attachment: SOLR-5340.patch

- Added the ability to create a named snapshot. 
Example - /replication?command=backup&name=testbackup
- For named snapshots "maxNumberOfBackups" and "numberToKeep" are ignored.
- Explicitly delete named snapshots
Example - /replication?command=deletebackup&name=testbackup

> Add support for named snapshots
> ---
>
> Key: SOLR-5340
> URL: https://issues.apache.org/jira/browse/SOLR-5340
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.5
>Reporter: Mike Schrag
> Attachments: SOLR-5340.patch
>
>
> It would be really nice if Solr supported named snapshots. Right now if you 
> snapshot a SolrCloud cluster, every node potentially records a slightly 
> different timestamp. Correlating those back together to effectively restore 
> the entire cluster to a consistent snapshot is pretty tedious.






[jira] [Updated] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5588:
--

Attachment: LUCENE-5588.patch

I cleaned up the patch:
- Reversed the loop (the FileChannel is opened once outside the loop and then 
fsync is tried 5 times). This makes the extra check for Windows obsolete. It 
also goes in line with what [~mikemccand] plans on LUCENE-3237 (repeating only 
the fsync on an already open IndexOutput).
- Tested MacOSX -> works; added an assert.

Uwe

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965282#comment-13965282
 ] 

Karl Wright commented on LUCENE-5584:
-

Hi Christian,

I think at this point, posting a proposed diff would be the best thing to do, 
along with perhaps a snippet of code demonstrating our particular use case.


> Allow FST read method to also recycle the output value when traversing FST
> --
>
> Key: LUCENE-5584
> URL: https://issues.apache.org/jira/browse/LUCENE-5584
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Affects Versions: 4.7.1
>Reporter: Christian Ziech
>
> The FST class heavily reuses Arc instances when traversing the FST. The 
> output of an Arc however is not reused. This can especially be important when 
> traversing large portions of a FST and using the ByteSequenceOutputs and 
> CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
> node read (which has an output).
> In our use case we intersect a lucene Automaton with a FST much 
> like it is done in 
> org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
> since the Automaton and the FST are both rather large tens or even hundreds 
> of thousands of temporary byte array objects are created.
> One possible solution to the problem would be to change the 
> org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
> you don't want to change the existing methods for compatibility):
> {code}
>   /** Decode an output value previously written with {@link
>*  #write(Object, DataOutput)} reusing the object passed in if possible */
>   public abstract T read(DataInput in, T reuse) throws IOException;
>   /** Decode an output value previously written with {@link
>*  #writeFinalOutput(Object, DataOutput)}.  By default this
>*  just calls {@link #read(DataInput)}. This tries to  reuse the object   
>*  passed in if possible */
>   public T readFinalOutput(DataInput in, T reuse) throws IOException {
> return read(in, reuse);
>   }
> {code}
> The new methods could then be used in the FST in the readNextRealArc() method 
> passing in the output of the reused Arc. For most inputs they could even just 
> invoke the original read(in) method.
> If you should decide to make that change I'd be happy to supply a patch 
> and/or tests for the feature.






[jira] [Commented] (SOLR-5932) DIH: retry query on "terminating connection due to conflict with recovery"

2014-04-10 Thread Gunnlaugur Thor Briem (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965278#comment-13965278
 ] 

Gunnlaugur Thor Briem commented on SOLR-5932:
-

One way to do this generically: define a set of exception predicates, e.g. 
{{retry_on_errors}} or something like that, which could be set or augmented in 
configuration. Each might be as simple as a regular expression to be matched 
against the exception message. When an exception is caught, the predicates are 
iterated and each is applied to the exception. If a predicate evaluates as 
true (there's a match), the exception is identified as one for which a retry 
is appropriate, and the import operation continues (unless the error repeats; 
maybe N retries with an exponential backoff, in case of a DB restart or a 
momentary network hiccup).

That's reasonably general, i.e. not specific to any one DB engine, and users 
can extend it by adding a regexp in an appropriate spot in 
{{db-data-config.xml}}.
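The predicate-plus-backoff mechanism described above can be sketched as follows. The class, the predicate list, and the config wiring are all hypothetical, not DataImportHandler's actual API:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.regex.Pattern;

/**
 * Sketch of configurable retry predicates: each predicate is a regex
 * matched against the exception message; a match triggers a bounded
 * retry with exponential backoff.
 */
public class RetryPredicates {

    // In the proposal these patterns would come from db-data-config.xml.
    static final List<Pattern> RETRY_ON_ERRORS = List.of(
        Pattern.compile("terminating connection due to conflict with recovery"),
        Pattern.compile("Connection reset"));

    /** True if any configured predicate matches the exception message. */
    static boolean shouldRetry(Throwable t) {
        String msg = t.getMessage();
        if (msg == null) return false;
        for (Pattern p : RETRY_ON_ERRORS) {
            if (p.matcher(msg).find()) return true;
        }
        return false;
    }

    /** Runs {@code task}, retrying up to {@code maxRetries} times with
     *  exponential backoff when the failure matches a retry predicate. */
    static <T> T runWithRetry(Callable<T> task, int maxRetries) throws Exception {
        long backoffMs = 100;
        for (int attempt = 0; ; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                if (attempt >= maxRetries || !shouldRetry(e)) throw e;
                Thread.sleep(backoffMs);
                backoffMs *= 2;   // exponential backoff between retries
            }
        }
    }
}
```

A DIH integration would wrap each JDBC query execution in something like runWithRetry, re-opening the connection before retrying, since PostgreSQL's hint is to reconnect and repeat the command.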

> DIH: retry query on "terminating connection due to conflict with recovery"
> --
>
> Key: SOLR-5932
> URL: https://issues.apache.org/jira/browse/SOLR-5932
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Priority: Minor
>
> When running DIH against a hot-standby PostgreSQL database, one may randomly 
> see queries fail with this error:
> {code}
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: FATAL: terminating connection due to 
> conflict with recovery
>   Detail: User query might have needed to see row versions that must be 
> removed.
>   Hint: In a moment you should be able to reconnect to the database and 
> repeat your command.
> {code}
> A reasonable course of action in this case is to catch the error and retry. 
> This would support the use case of doing an initial (possibly lengthy) clean 
> full-import against a hot-standby server, and then just running incremental 
> dataimports against the master.






[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 2159 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/2159/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([982687357BB46D83:F27D382422FA4D70]:0)
at java.lang.Integer.valueOf(Integer.java:642)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:125)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:702)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:613)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:920)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)




Build Log:
[...truncated 1049 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='({ unfx mcmgcb iy mwmzgbi 
S%U\u8d27\udabd\udd3f\u4efa@4\u0324\ue58e\u3df6\u26d61 \ufe59 ngtnemdl a 
jpoqwniv \u0f37\uda78\udccb\u0413\uef41 \u0288\u0276\u027f\u025a\u02a0 
\u19c31@\uf4cdG wdfvjeue \uf183\uf5ee\udab5\ude6c\u02e8\uda18\ude88\uaa09\u02f9 
\ud7e8\ud7ff\ud7d6\ud7d8\ud7bd\ud7da\ud7dd ulttugi 
\u017a\ud9f3\ude4e\u05a7(\u07c7{\uffa3 '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2> tokenizer=
   [junit4]   2>   org.apache.lucene.analysis.ngram.NGramTokenizer(LUCENE_50, 
org.apache.lucene.util.AttributeSource$AttributeFactory$DefaultAttributeFactory@4678ccfc,
 37, 91)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@48a7406c 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,
 45)
   [junit4]   2>   
org.apache.lucene.analysis.de.GermanStemFilter(ValidatingTokenFilter@3927dd17 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingT

[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 2157 - Failure!

2014-04-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/2157/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: PROPFIND 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:136)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.fetchRepositoryUUID(DAVConnection.java:120)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:150)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1320)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:518)
at hudson.model.Run.execute(Run.java:1689)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: PROPFIND /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.doPropfind(DAVConnection.java:131)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getProperties(DAVUtil.java:73)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getResourceProperties(DAVUtil.java:79)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getStartingProperties(DAVUtil.java:103)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:125)
... 32 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: PROPFIND request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: Connection reset
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnec

[jira] [Commented] (LUCENE-5584) Allow FST read method to also recycle the output value when traversing FST

2014-04-10 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965215#comment-13965215
 ] 

Christian Ziech commented on LUCENE-5584:
-

Thanks for the very quick and helpful replies. It seems I owe you some more 
hard, concrete information on our use case, what exactly we do, and our 
environment.
About the environment - the tests were run with
{quote}
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.4.el6_5-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
{quote}
on CentOS 6.5. Our VM options don't enable the TLAB right now, but I'm 
definitely considering using it for other reasons. Currently we are running 
with the following (GC-relevant) arguments: -Xmx6g -XX:MaxNewSize=700m 
-XX:+UseConcMarkSweepGC -XX:MaxDirectMemorySize=35g. 

I'm not so much worried about get() performance, although that could be
improved as well. We use Lucene's LevenshteinAutomata class to generate a
couple of Levenshtein automata with edit distance 1 or 2 (one per search
term), build their union, and intersect it with our FST using a modified
version of
org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths()
that pushes every matched entry through a callback instead of returning the
whole list of paths (also for efficiency: we don't actually need the byte
arrays, we only parse them into a value object, so reusing the output byte
array is fine for us).
Our FST has about 500M entries, each with a value of approx. 10-20 bytes.
For a random query with 4 terms (and hence a union of 4 Levenshtein
automata), that yields ~2M visited nodes with output (hence 2M temporary
byte[] allocations) and a total size of ~7.5M for the temporary byte arrays
(plus the per-instance overhead). In that experiment I matched about 10k
terms in the FST. Those numbers already take into account that we use our
own add implementation that always writes into the same BytesRef instance
when adding outputs.
The overall impact on GC, and on the execution speed of the method, was
rather significant in total; I can try to dig up numbers for that, but they
would be rather application-specific.

Does this help and answers all the questions so far?

Btw: Experimenting a little with the change, I noticed things may be
slightly more complicated, since the output of a node is often overwritten
with "NO_OUTPUT" from the Outputs; that method would therefore need to
recycle the current output as well if possible, which may have interesting
side effects, but hopefully it should be manageable.
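
The allocation pattern I'm describing, and the reuse a read(in, reuse) method
would enable, can be sketched outside Lucene with a minimal length-prefixed
decoder (hypothetical standalone code, not the actual Outputs API; Lucene
would track the valid length via BytesRef rather than relying on array size):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Minimal sketch of the proposed read(in, reuse): decode a length-prefixed
// byte[] value, reusing the caller's buffer whenever it is big enough.
public class ReuseDecode {
    static byte[] read(DataInputStream in, byte[] reuse) throws IOException {
        int len = in.readInt();
        // Allocate only when the recycled array cannot hold the value.
        byte[] out = (reuse != null && reuse.length >= len) ? reuse : new byte[len];
        in.readFully(out, 0, len);
        return out;
    }

    public static void main(String[] args) throws IOException {
        byte[] encoded = {0, 0, 0, 3, 10, 20, 30}; // int length 3, then payload
        byte[] reuse = new byte[8];
        byte[] result = read(new DataInputStream(new ByteArrayInputStream(encoded)), reuse);
        System.out.println(result == reuse);  // the recycled buffer was used
        System.out.println(result[0] + "," + result[1] + "," + result[2]);
    }
}
```

With something like this, the intersection would hand the same recycled
buffer back in for every visited node, instead of ~2M fresh byte[]
allocations per query.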

> Allow FST read method to also recycle the output value when traversing FST
> --
>
> Key: LUCENE-5584
> URL: https://issues.apache.org/jira/browse/LUCENE-5584
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Affects Versions: 4.7.1
>Reporter: Christian Ziech
>
> The FST class heavily reuses Arc instances when traversing the FST. The 
> output of an Arc however is not reused. This can especially be important when 
> traversing large portions of a FST and using the ByteSequenceOutputs and 
> CharSequenceOutputs. Those classes create a new byte[] or char[] for every 
> node read (which has an output).
> In our use case we intersect a lucene Automaton with a FST much 
> like it is done in 
> org.apache.lucene.search.suggest.analyzing.FSTUtil.intersectPrefixPaths() and 
> since the Automaton and the FST are both rather large tens or even hundreds 
> of thousands of temporary byte array objects are created.
> One possible solution to the problem would be to change the 
> org.apache.lucene.util.fst.Outputs class to have two additional methods (if 
> you don't want to change the existing methods for compatibility):
> {code}
>   /** Decode an output value previously written with {@link
>*  #write(Object, DataOutput)} reusing the object passed in if possible */
>   public abstract T read(DataInput in, T reuse) throws IOException;
>   /** Decode an output value previously written with {@link
>*  #writeFinalOutput(Object, DataOutput)}.  By default this
>*  just calls {@link #read(DataInput)}. This tries to  reuse the object   
>*  passed in if possible */
>   public T readFinalOutput(DataInput in, T reuse) throws IOException {
> return read(in, reuse);
>   }
> {code}
> The new methods could then be used in the FST in the readNextRealArc() method 
> passing in the output of the reused Arc. For most inputs they could even just 
> invoke the original read(in) method.
> If you should decide to make that change I'd be happy to supply a patch 
> and/or tests for the feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965214#comment-13965214
 ] 

Michael McCandless commented on LUCENE-3237:


Thanks Simon.

bq. Hey mike, thanks for reopening this. 

I actually didn't reopen yet ... because I do think this really is
paranoia.  The OS man pages make the semantics clear, and what we are
doing today (reopen the file for syncing) is correct.

bq. I like the fact that we get rid of the general unsynced files stuff in 
Directory.
bq. given the last point we move it in the right place inside IW that is where 
it should be

Yeah I really like that.

But, we could do that separately, i.e. add private tracking inside IW
of which newly written file names haven't been sync'd.

bq. the problem with the current patch is that it holds on to the buffers in
BufferedIndexOutput. I think we need to work around this; here are a couple
of ideas:
bq. introduce a SyncHandle class that we can pull from IndexOutput that allows 
to close the IndexOutput but lets you fsync after the fact

I think that's a good idea.  For FSDir impls this is just a thin
wrapper around FileDescriptor.

bq. this handle can be refcounted internally and we just decrement the count on 
IndexOutput#close() as well as on SyncHandle#close()
bq. we can just hold on to the SyncHandle until we need to sync in IW

Ref counting may be overkill?  Who else will be pulling/sharing this
sync handle?  Maybe we can add an "IndexOutput.closeToSyncHandle": the
IndexOutput flushes and is unusable from then on, but returns the sync
handle, which the caller must later close.

One downside of moving to this API is ... it rules out writing some
bytes, fsyncing, writing some more, fsyncing, e.g. if we wanted to add
a transaction log impl on top of Lucene.  But I think that's OK
(design for today).  There are other limitations in IndexOutput for an
xlog impl...

bq. since this will basically close the underlying FD later, we might want to
think about size-bounding the number of unsynced files and maybe let indexing
threads fsync them concurrently? Maybe something we can do later.
bq. if we know we flush for a commit, we can already fsync directly, which
might save resources / time since it might be concurrent

Yeah, we can pursue this in "phase 2".  The OS will generally move
dirty buffers to stable storage over time anyway, so the cost of
fsyncing files written (relatively) long ago (tens of seconds; on Linux
I think the default is usually 30 seconds) will usually be low.  The
problem is that on some filesystems fsync can be unexpectedly costly
(there was a "famous" case in ext3,
https://bugzilla.mozilla.org/show_bug.cgi?id=421482, but this has been
fixed), so we need to be careful about this.


> FSDirectory.fsync() may not work properly
> -
>
> Key: LUCENE-3237
> URL: https://issues.apache.org/jira/browse/LUCENE-3237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Shai Erera
> Attachments: LUCENE-3237.patch
>
>
> Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
> FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
> written to the file by other FileDescriptors. It would be better if we do 
> this operation on the actual RAF/FileOS which wrote the data. We can add 
> sync() to IndexOutput and FSIndexOutput will do that.
> Directory-wise, we should stop syncing on file names, and instead sync on the 
> IOs that performed the write operations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965208#comment-13965208
 ] 

Uwe Schindler commented on LUCENE-5588:
---

Cool, thanks. Nice blog post! In fact our current patch should be fine then?

Should we commit it to trunk and branch_4x? I will also check MacOSX on my VM
to validate that it also works on OSX, so I can modify the assert to check
that the sync succeeds on OSX. Currently it only asserts on Linux that no
errors occurred.

According to the blog post, Windows does not work at all, so we are fine with
the "optimization" (early exit).
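
For reference, the FileChannel-based directory sync can be sketched
standalone like this (a best-effort sketch, not the patch itself; the catch
block stands in for the "early exit" on platforms where opening or forcing a
directory fails):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Fsync a directory with NIO.2: open the directory itself for read and
// force() so renames/creates in it reach stable storage. On Windows (and
// some filesystems) the open or the force throws, so this is best effort.
public class DirFsync {
    static boolean trySyncDirectory(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // flush the directory entries themselves
            return true;
        } catch (IOException | UnsupportedOperationException e) {
            return false;   // unsupported here: the "early exit" case
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dirfsync");
        System.out.println("synced=" + trySyncDirectory(dir));
    }
}
```

On Linux this prints synced=true; on platforms without directory fsync it
falls through to false instead of failing the commit.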

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-5588) We should also fsync the directory when committing

2014-04-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965185#comment-13965185
 ] 

Adrien Grand commented on LUCENE-5588:
--

bq. In fact. it also does not work on Linux, see 
http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/6952

FYI, the same person who reported this bug wrote an interesting blog post about 
fsync at 
http://blog.httrack.com/blog/2013/11/15/everything-you-always-wanted-to-know-about-fsync/

> We should also fsync the directory when committing
> --
>
> Key: LUCENE-5588
> URL: https://issues.apache.org/jira/browse/LUCENE-5588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5588.patch, LUCENE-5588.patch
>
>
> Since we are on Java 7 now and we already fixed FSDir.sync to use FileChannel 
> (LUCENE-5570), we can also fsync the directory (at least try to do it). 
> Unlike RandomAccessFile, which must be a regular file, FileChannel.open() can 
> also open a directory: 
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-directory-with-nio-2






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965182#comment-13965182
 ] 

Simon Willnauer commented on LUCENE-3237:
-

Hey Mike, thanks for reopening this. I like the patch since it fixes multiple
issues.

 * I like the fact that we get rid of the general unsynced-files stuff in
Directory.
 * Given the last point, we move it into the right place inside IW, which is
where it should be.
 * The problem with the current patch is that it holds on to the buffers in
BufferedIndexOutput. I think we need to work around this; here are a couple
of ideas:
  ** introduce a SyncHandle class that we can pull from an IndexOutput,
which allows closing the IndexOutput but lets you fsync after the fact
  ** this handle can be refcounted internally, and we just decrement the
count on IndexOutput#close() as well as on SyncHandle#close()
  ** we can just hold on to the SyncHandle until we need to sync in IW
  ** since this will basically close the underlying FD later, we might want
to think about size-bounding the number of unsynced files and maybe letting
indexing threads fsync them concurrently? Maybe something we can do later.
  ** if we know we flush for a commit, we can already fsync directly, which
might save resources / time since it might be concurrent

just a couple of ideas
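
The SyncHandle idea could be sketched roughly like this (hypothetical names,
with a fixed ref count of two for the writer side and the deferred-sync
side; not an actual patch):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicInteger;

// The handle shares the writer's FileChannel and keeps the FD open until
// both the writer and the deferred-fsync side have released it, so IW can
// fsync long after IndexOutput.close().
public class SyncHandle {
    private final FileChannel channel;
    private final AtomicInteger refCount = new AtomicInteger(2); // writer + syncer

    SyncHandle(FileChannel channel) { this.channel = channel; }

    void fsync() throws IOException {
        channel.force(false); // data only; the directory is handled separately
    }

    void release() throws IOException {
        if (refCount.decrementAndGet() == 0) {
            channel.close();  // last holder closes the underlying FD
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("synchandle", ".bin");
        FileChannel ch = FileChannel.open(f, StandardOpenOption.WRITE);
        SyncHandle h = new SyncHandle(ch);
        h.release();          // the "IndexOutput.close()" side lets go
        h.fsync();            // deferred fsync still works: FD is alive
        h.release();          // final holder closes the channel
        System.out.println("open=" + ch.isOpen());
    }
}
```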

> FSDirectory.fsync() may not work properly
> -
>
> Key: LUCENE-3237
> URL: https://issues.apache.org/jira/browse/LUCENE-3237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Shai Erera
> Attachments: LUCENE-3237.patch
>
>
> Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
> FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
> written to the file by other FileDescriptors. It would be better if we do 
> this operation on the actual RAF/FileOS which wrote the data. We can add 
> sync() to IndexOutput and FSIndexOutput will do that.
> Directory-wise, we should stop syncing on file names, and instead sync on the 
> IOs that performed the write operations.






[jira] [Commented] (LUCENE-3237) FSDirectory.fsync() may not work properly

2014-04-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965162#comment-13965162
 ] 

Michael McCandless commented on LUCENE-3237:


bq. In fact, fsync syncs the whole file, because it relies on fsync() POSIX API 
or FlushFileBuffers() in Windows. Both really sync the file the descriptor is 
pointing to. Those functions don't sync the descriptor's buffers only.

This is my impression as well, and as Yonik said, it's hard to imagine any 
[sane] operating system doing it differently ... so this really is paranoia.

bq. {{FSDirectory.FSIndexOutput#sync()}} should call flush() before syncing the 
underlying file.

OK I'll move it there (I'm currently doing it in the first close "attempt").

bq. This does not do the for-loop we currently do to repeat the fsync 5 times 
if it fails.

I'll add an IOUtils.sync that takes an fd and does the retry thing.
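
Such a retry helper might look roughly like this (a hypothetical standalone
sketch, not Lucene's actual IOUtils code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Retry fsync a few times before giving up, in the spirit of the existing
// "repeat the fsync 5 times" loop.
public class RetrySync {
    static void fsync(FileChannel channel, int attempts) throws IOException {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                channel.force(true);
                return;                  // success
            } catch (IOException e) {
                last = e;                // remember and retry after a pause
                try {
                    Thread.sleep(5);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException(ie);
                }
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("retrysync", ".bin");
        try (FileChannel ch = FileChannel.open(f, StandardOpenOption.WRITE)) {
            fsync(ch, 5);
            System.out.println("synced");
        }
    }
}
```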

bq. Also, I would not remove Directory.sync(), we should maybe leave this for 
LUCENE-5588 to sync the directory itself.

Right, we should add it back, as a method taking no file args?  Its purpose 
would be LUCENE-5588.

> FSDirectory.fsync() may not work properly
> -
>
> Key: LUCENE-3237
> URL: https://issues.apache.org/jira/browse/LUCENE-3237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Shai Erera
> Attachments: LUCENE-3237.patch
>
>
> Spinoff from LUCENE-3230. FSDirectory.fsync() opens a new RAF, sync() its 
> FileDescriptor and closes RAF. It is not clear that this syncs whatever was 
> written to the file by other FileDescriptors. It would be better if we do 
> this operation on the actual RAF/FileOS which wrote the data. We can add 
> sync() to IndexOutput and FSIndexOutput will do that.
> Directory-wise, we should stop syncing on file names, and instead sync on the 
> IOs that performed the write operations.





