[jira] [Commented] (SOLR-6670) change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

2014-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194245#comment-14194245
 ] 

ASF subversion and git services commented on SOLR-6670:
---

Commit 1636252 from [~erickoerickson] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1636252 ]

SOLR-6670: change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

> change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE
> ---
>
> Key: SOLR-6670
> URL: https://issues.apache.org/jira/browse/SOLR-6670
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6670.patch
>
>
> JIRA for Jan's comments on SOLR-6513:
> I thought we agreed to prefer the term "shard" over "slice", so I think we 
> should do this for this API as well.
> The only place in our refguide we use the word "slice" is in How SolrCloud 
> Works [1] and that description is disputed.
> The refguide explanation of what a shard is can be found in Shards and 
> Indexing Data in SolrCloud [2], quoting:
> When your data is too large for one node, you can break it up and store it in 
> sections by creating one or more shards. Each is a portion of the logical 
> index, or core, and it's the set of all nodes containing that section of the 
> index.
> So I'm proposing a rename of this API to BALANCESHARDUNIQUE and a rewrite of 
> [1].
> [1] https://cwiki.apache.org/confluence/display/solr/How+SolrCloud+Works
> [2] 
> https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
> Note Mark's comment on that JIRA, but I think it would be best to continue to 
> talk about "shards" with user-facing operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6670) change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

2014-11-02 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-6670.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

thanks for pointing that out Jan!







[jira] [Comment Edited] (LUCENE-6037) PendingTerm cannot be cast to PendingBlock

2014-11-02 Thread zhanlijun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194130#comment-14194130
 ] 

zhanlijun edited comment on LUCENE-6037 at 11/3/14 3:59 AM:


I found the cause of the problem.
I have an application scenario that adds the same document to the index using
indexWriter.addDocuments(). To improve indexing efficiency, I made the
document a static variable. This works fine in a single-threaded environment;
however, when I use multiple threads to call indexWriter.addDocuments(), the
error occurs.

The solution: I create a new document for each thread, and the error no
longer appears.
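The fix described above can be sketched in plain Java. This is only an illustration of the pattern, not the reporter's actual code: `buildDocument()` is a hypothetical stand-in (using a `Map` in place of a Lucene `Document`, which is mutable and not safe to share across concurrent `IndexWriter.addDocuments()` calls); the point is that each indexing task builds its own fresh document instead of mutating one shared static instance.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerThreadDocumentSketch {
    // Hypothetical stand-in for building a document; in real code this would
    // create a new org.apache.lucene.document.Document and add its fields.
    static Map<String, String> buildDocument(int id) {
        Map<String, String> doc = new HashMap<>();
        doc.put("id", Integer.toString(id));
        doc.put("body", "some content");
        return doc;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Map<String, String>>> futures = new ArrayList<>();
        // Each task builds its OWN document instead of mutating a shared
        // static one -- this is the fix described in the comment above.
        for (int i = 0; i < 4; i++) {
            final int id = i;
            futures.add(pool.submit(() -> buildDocument(id)));
        }
        for (Future<Map<String, String>> f : futures) {
            System.out.println(f.get().get("id"));
        }
        pool.shutdown();
    }
}
```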


was (Author: zhanlijun):
I used multiple threads to add a single document (a static variable), which
caused this error. After I corrected it, the error no longer appeared.

> PendingTerm cannot be cast to PendingBlock
> --
>
> Key: LUCENE-6037
> URL: https://issues.apache.org/jira/browse/LUCENE-6037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.3.1
> Environment: ubuntu 64bit
>Reporter: zhanlijun
>Priority: Critical
> Fix For: 4.3.1
>
>
> the error as follows:
> java.lang.ClassCastException: 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingTerm cannot be cast to 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingBlock
> at 
> org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finish(BlockTreeTermsWriter.java:1014)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:553)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:480)
> at 
> org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:378)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:413)
> at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1283)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1243)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1228)






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 674 - Still Failing

2014-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/674/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testDistribSearch

Error Message:
Shard split did not complete. Last recorded state: running 
expected:<[completed]> but was:<[running]>

Stack Trace:
org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: 
running expected:<[completed]> but was:<[running]>
at 
__randomizedtesting.SeedInfo.seed([80B8668EE9622AC1:15EE8969E3D4AFD]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:114)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.doTest(CollectionsAPIAsyncDistributedZkTest.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapt

Re: History question: contribution from Solr to Lucene

2014-11-02 Thread Alexandre Rafalovitch
Thanks David,

That's perfect for my needs.

Regards,
   Alex.
P.s. I do check the blog from time to time, but somehow missed this one
P.p.s. Oh Google Reader with offline mode, why dost thou abandon me in
these days of (still ongoing) need!

Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 2 November 2014 22:45, david.w.smi...@gmail.com
 wrote:
> Alex,
> You should follow Yonik’s blog (Heliosearch), he has a post on this subject,
> more or less:
> http://heliosearch.org/lucene-solr-history/
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
> On Sun, Nov 2, 2014 at 8:36 PM, Alexandre Rafalovitch 
> wrote:
>>
>> Hi,
>>
>> I am trying to understand what used to be in Solr pre-merge and got
>> moved into Lucene packages after the projects merged. For example
>> analyzers/tokenizers, were they always in Lucene or all originally in
>> Solr?
>>
>> I am not sure where to check this quickly, so I am hoping people can
>> do a short history or a good URL.
>>
>> Regards,
>>Alex.
>>
>> Personal: http://www.outerthoughts.com/ and @arafalov
>> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
>> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>>
>>
>




Re: History question: contribution from Solr to Lucene

2014-11-02 Thread david.w.smi...@gmail.com
Alex,
You should follow Yonik’s blog (Heliosearch), he has a post on this
subject, more or less:
http://heliosearch.org/lucene-solr-history/

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Sun, Nov 2, 2014 at 8:36 PM, Alexandre Rafalovitch 
wrote:

> Hi,
>
> I am trying to understand what used to be in Solr pre-merge and got
> moved into Lucene packages after the projects merged. For example
> analyzers/tokenizers, were they always in Lucene or all originally in
> Solr?
>
> I am not sure where to check this quickly, so I am hoping people can
> do a short history or a good URL.
>
> Regards,
>Alex.
>
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
>


[jira] [Commented] (SOLR-6670) change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

2014-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194169#comment-14194169
 ] 

ASF subversion and git services commented on SOLR-6670:
---

Commit 1636226 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1636226 ]

SOLR-6670: change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE







[jira] [Updated] (SOLR-6670) change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

2014-11-02 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6670:
-
Attachment: SOLR-6670.patch

Renames BALANCESLICEUNIQUE to BALANCESHARDUNIQUE. Also, the "sliceUnique"
parameter for ADDREPLICAPROP is now "shardUnique".
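As a quick illustration of the rename (a sketch, not taken from the patch), the calls would be issued against the Collections API roughly as below. The host, collection, shard, and replica names are placeholders, and the parameter set shown is an assumption based on the discussion above; only the standard library is used to build the URLs.

```java
public class BalanceShardUniqueUrl {
    // Builds the Collections API URL for the renamed BALANCESHARDUNIQUE
    // action (formerly BALANCESLICEUNIQUE).
    static String buildUrl(String host, String collection, String property) {
        return host + "/solr/admin/collections"
                + "?action=BALANCESHARDUNIQUE"
                + "&collection=" + collection
                + "&property=" + property;
    }

    // ADDREPLICAPROP with the renamed "shardUnique" flag (formerly
    // "sliceUnique"); all names here are placeholders.
    static String addReplicaPropUrl(String host, String collection,
            String shard, String replica, String prop, String value) {
        return host + "/solr/admin/collections"
                + "?action=ADDREPLICAPROP"
                + "&collection=" + collection + "&shard=" + shard
                + "&replica=" + replica + "&property=" + prop
                + "&property.value=" + value
                + "&shardUnique=true";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("http://localhost:8983", "collection1",
                "preferredLeader"));
        System.out.println(addReplicaPropUrl("http://localhost:8983",
                "collection1", "shard1", "core_node1", "preferredLeader",
                "true"));
    }
}
```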







History question: contribution from Solr to Lucene

2014-11-02 Thread Alexandre Rafalovitch
Hi,

I am trying to understand what used to be in Solr pre-merge and got
moved into Lucene packages after the projects merged. For example
analyzers/tokenizers, were they always in Lucene or all originally in
Solr?

I am not sure where to check this quickly, so I am hoping people can
do a short history or a good URL.

Regards,
   Alex.

Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853




Re: An experience and some thoughts about solr/example -> solr/server

2014-11-02 Thread Erick Erickson
I have to say I like "the new way" of doing things. I'm sooo
tired of maintaining a bunch of command-line files that copy everything
from example to node1, node2, node3, then starting things up in 4
separate windows (or 6 recently) and... Hmmm, that looks a lot like
what the new command now does for me...

I'm a little discomfited by having to learn new stuff, but that's "a personal
problem" ;).

I do think we have to be mindful of people who want something like what Shawn
was doing, I do this all the time as well. And of new people who haven't a clue.
Hmmm, actually new folks might have an easier time of it since they don't
have any expectations ;).

bq: "...'run example' target that could also fire off a create for collection1."

Exactly, with a note (perhaps in the help for this command) about where the
config files are located that are used. Perhaps with a 'clean' option that
blows away the current data directory and (if Zookeeper becomes the one
source of truth) does an upconfig first.

For me, the goal here is to be up and running as fast as I could be in the old
way of doing things, i.e.
1> cd to 
2> execute the command 
3> go into exampledocs and type 'java -jar post.jar *.xml"
4> search
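Step 3 above could equally be done from a few lines of Java. This is only a sketch of what post.jar does, under assumptions: the base URL and the `exampledocs` directory are placeholders, and the request details are paraphrased rather than taken from SimplePostTool's source.

```java
import java.io.File;

public class PostExampleDocsSketch {
    // Sketch of what 'java -jar post.jar *.xml' does: send each XML file in
    // exampledocs to Solr's update handler. The URL shape is an assumption.
    static String updateUrl(String base, String core) {
        return base + "/solr/" + core + "/update";
    }

    public static void main(String[] args) {
        String target = updateUrl("http://localhost:8983", "collection1");
        File dir = new File("exampledocs");
        File[] xmlFiles = dir.listFiles((d, name) -> name.endsWith(".xml"));
        int count = (xmlFiles == null) ? 0 : xmlFiles.length;
        // A real implementation would POST each file's bytes with
        // Content-Type: application/xml, then issue a commit.
        System.out.println("Would POST " + count + " files to " + target);
    }
}
```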

The current questions (mine as well) are, as Mark says, the kind I'd expect
with such a fundamental change. Now that it's checked in, people will be
trying all sorts of things and uncovering nooks and crannies.

So let's have an umbrella JIRA where we can collect things and fix what we
find as we go. I'll create one if there isn't one already; let me know.

Erick

On Sun, Nov 2, 2014 at 4:06 PM, Alexandre Rafalovitch
 wrote:
> That's interesting. I did not realize we were going away from
> ElasticSearch on that.
>
> So, do we need to update the tutorial or some other super-obvious way
> of what the next step is? (I haven't checked). Because one difference
> between Solr and the Database is that create table is a standard SQL
> command used in any database (for the basic use case). Whereas Solr is
> a unique snowflake and we cannot expect any pre-existing knowledge
> transfer.
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 2 November 2014 18:06, Mark Miller  wrote:
>> I'm sure after such a large change there are some things to smooth over.
>>
>> As far as starting with no cores or collections, I very strongly think that
>> is the way to go.
>>
>> I've been working with a Solr checkout designed this way for a long time and
>> I tend to just keep a text file of common cmd line entries around, one of
>> which is a simple curl command to create collection1 and the corresponding
>> command to remove it.
>>
>> Something that might make things a bit easier could also be an alternate
>> developer 'run example' target that could also fire off a create for
>> collection1.
>>
>> For other cases, I'd think of it like a database example - step one is to
>> create a table named foo, not dive in on the built in table1.
>>
>> - Mark
>>
>> On Sun Nov 02 2014 at 4:34:57 PM Shawn Heisey  wrote:
>>>
>>> I hope this won't serve to interrupt the momentum for SOLR-3619 and
>>> related work, just perhaps influence the direction.  What I'm going to
>>> relate here probably is no surprise to anyone involved with the effort.
>>>
>>> In response to a user's question on IRC, I wanted to take their
>>> fieldType, incorporate it into the main example on my source code
>>> checkout, and fire up the example so I could poke around on the analysis
>>> tab.
>>>
>>> I only had branch_5x, and it was a clean checkout, so I did "svn up"
>>> followed by "ant example" and got to work.  The first thing I discovered
>>> is that there's no longer a conf directory in example/solr/collection1.
>>> I poked around for a bit, found what looked like a likely candidate
>>> config, and modified the schema.xml.  Then I poked around a bit more and
>>> learned that "bin/solr start" was what I need to use to get it running.
>>>
>>> I was surprised to see that when Solr started, there were no cores
>>> loaded at all.  Thinking about all the discussions around this topic,
>>> this makes a lot of sense ... but it does make it hard to implement what
>>> I typically use the example for, which is quick tests of small
>>> config/schema changes or user-provided scenarios from IRC or the mailing
>>> list.
>>>
>>> I think the README or other documentation should probably cover exactly
>>> what to do if your intent is to use collection1, modify it, and poke
>>> around.
>>>
>>> Separately, I noticed that there are a lot of java options used to start
>>> the server, including an increase to PermSize.  In all my time using
>>> Solr, I've never had to change that.  Do we have common problems with
>>> the new startup script and solr version that require it?
>>>
>>> Thanks,
>>> Shawn
>>>

Re: An experience and some thoughts about solr/example -> solr/server

2014-11-02 Thread Tomás Fernández Löbbe
My understanding is that you can run the examples, but you have to
specifically ask for one using the "-e" argument, like:
./solr start -e techproducts

That said, I'm trying it right now and it's failing with confusing
messages (errors followed by a success message):

a82066179bf9:bin tflobbe$ ./solr start -e techproducts
Waiting to see Solr listening on port 8983 [|]  Still not seeing Solr
listening on 8983 after 30 seconds!
tail:
/Users/tflobbe/Documents/apache/lucene-solr-trunk-commit/solr/server/logs/solr.log:
No such file or directory
Creating new core using command:
http://localhost:8983/solr/admin/cores?action=CREATE&name=techproducts&configSet=sample_techproducts_configs

WARN  - 2014-11-02 16:32:24.486; org.apache.solr.util.SolrCLI; Request to
http://localhost:8983/solr/admin/cores?action=CREATE&name=techproducts&configSet=sample_techproducts_configs
failed due to: Connection refused, sleeping for 5 seconds before re-trying
the request ...
Exception in thread "main" java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:117)
at
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
at
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:214)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:160)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:136)
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:460)
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:413)
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:423)
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:399)
at org.apache.solr.util.SolrCLI$ApiTool.runTool(SolrCLI.java:692)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:185)
Indexing tech product example docs from
/Users/tflobbe/Documents/apache/lucene-solr-trunk-commit/solr/example/exampledocs
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr/techproducts/update
using content-type application/xml..
POSTing file gb18030-example.xml
SimplePostTool: FATAL: Connection error (is Solr running at
http://localhost:8983/solr/techproducts/update ?):
java.net.ConnectException: Connection refused

Solr techproducts example launched successfully. Direct your Web browser to
http://localhost:8983/solr to visit the Solr Admin UI


Tomás
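The "Waiting to see Solr listening on port 8983" check that fails in the run above is essentially a socket probe loop. A minimal sketch of that idea follows; the host, port, and retry/timeout values are assumptions, not the actual bin/solr implementation.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForPortSketch {
    // Returns true once something accepts TCP connections on host:port,
    // retrying until timeoutMillis elapses. A sketch of the startup wait
    // loop only, not the real script's logic.
    static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 1000);
                return true; // something is listening
            } catch (IOException e) {
                try {
                    Thread.sleep(250); // not up yet; retry shortly
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // gave up, like the "Still not seeing Solr" message
    }

    public static void main(String[] args) {
        System.out.println(waitForPort("localhost", 8983, 2000));
    }
}
```

If the create-core and indexing steps only ran after this probe succeeded, the confusing "errors followed by success" sequence above would not occur.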

On Sun, Nov 2, 2014 at 4:06 PM, Alexandre Rafalovitch 
wrote:

> That's interesting. I did not realize we were going away from
> ElasticSearch on that.
>
> So, do we need to update the tutorial or some other super-obvious way
> of what the next step is? (I haven't checked). Because one difference
> between Solr and the Database is that create table is a standard SQL
> command used in any database (for the basic use case). Whereas Solr is
> a unique snowflake and we cannot expect any pre-existing knowledge
> transfer.
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 2 November 2014 18:06, Mark Miller  wrote:
> > I'm sure after such a large change there are some things to smooth over.
> >
> > As far as starting with no cores or collections, I very strongly think
> that
> > is the way to go.
> >
> > I've been working with a Solr checkout designed this way for a long time
> and
> > I tend to just keep a text file of common cmd line entries around, one of
> > which is a simple curl command to create collection1 and the
> corresponding
> > command to remove it.
> >
> > Something that might make things a bit easier could also be an alternate
> > developer 'run example' target that could also fire off a create for
> > collection1.
> >
> > For other cases, I'd think of it like a database example - step one is to
> > create a table named foo, not dive in on the built in table1.

[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 216 - Still Failing

2014-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/216/

No tests ran.

Build Log:
[...truncated 51306 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
"file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (8.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 27.5 MB in 0.04 sec (676.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 63.2 MB in 0.10 sec (663.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 72.4 MB in 0.10 sec (724.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5410 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5410 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.disableHdfs=true -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 207 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (84.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.0-src.tgz...
   [smoker] 33.8 MB in 0.05 sec (649.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.tgz...
   [smoker] 147.9 MB in 0.24 sec (603.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.zip...
   [smoker] 154.0 MB in 0.24 sec (649.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1522, in 
   [smoker] main()
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1467, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1511, in smokeTest
   [smoker] unpackAndVerify(java, 'solr', tmpDir, artifact, svnRevision, 
version, testArgs, baseURL)
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 616, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 763, in verifyUnpacked
   [smoker] checkAllJARs(os.getcwd(), project, svnRevision, version, 
tmpDir, baseURL)
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 255, in checkAllJARs
   [smoker] noJavaPackageClasses('JAR file "%s"' % fullPa

[jira] [Commented] (LUCENE-6037) PendingTerm cannot be cast to PendingBlock

2014-11-02 Thread zhanlijun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194130#comment-14194130
 ] 

zhanlijun commented on LUCENE-6037:
---

I was using multiple threads to add a single shared Document instance (a 
static variable), which caused this error.  After I corrected that so each 
thread builds its own document, the error no longer appeared.
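For anyone else hitting this: IndexWriter is designed to be shared across threads, but a Document instance that several threads mutate while it is being indexed is not. A stdlib-only sketch of the safe pattern — each worker builds its own document — where `PerThreadDocs`, `indexAll`, and the Map used as a stand-in for Lucene's Document are all illustrative names, not Lucene API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadDocs {
    // Stand-in for IndexWriter.addDocument: just counts, thread-safely.
    private static final AtomicInteger indexed = new AtomicInteger();

    static void addDocument(Map<String, String> doc) {
        indexed.incrementAndGet();
    }

    // Each task creates its OWN document instead of mutating a shared one.
    public static int indexAll(int threads, int docsPerThread) {
        indexed.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < docsPerThread; i++) {
                    Map<String, String> doc = new HashMap<>(); // per-thread, never shared
                    doc.put("body", "lucene");
                    addDocument(doc);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return indexed.get();
    }

    public static void main(String[] args) {
        System.out.println(indexAll(4, 100)); // prints 400
    }
}
```

The same shape applies with the real API: share one IndexWriter, but construct a fresh Document (or at least never mutate one concurrently) inside each worker.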

> PendingTerm cannot be cast to PendingBlock
> --
>
> Key: LUCENE-6037
> URL: https://issues.apache.org/jira/browse/LUCENE-6037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.3.1
> Environment: ubuntu 64bit
>Reporter: zhanlijun
>Priority: Critical
> Fix For: 4.3.1
>
>
> the error as follows:
> java.lang.ClassCastException: 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingTerm cannot be cast to 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingBlock
> at 
> org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finish(BlockTreeTermsWriter.java:1014)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:553)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:480)
> at 
> org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:378)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:413)
> at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1283)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1243)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1228)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: An experience and some thoughts about solr/example -> solr/server

2014-11-02 Thread Alexandre Rafalovitch
That's interesting. I did not realize we were going away from
ElasticSearch on that.

So, do we need to update the tutorial, or provide some other super-obvious
pointer to what the next step is? (I haven't checked.) One difference
between Solr and a database is that CREATE TABLE is a standard SQL
command used in any database (for the basic use case), whereas Solr is
a unique snowflake, so we cannot expect any pre-existing knowledge
transfer.

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 2 November 2014 18:06, Mark Miller  wrote:
> I'm sure after such a large change there are some things to smooth over.
>
> As far as starting with no cores or collections, I very strongly think that
> is the way to go.
>
> I've been working with a Solr checkout designed this way for a long time and
> I tend to just keep a text file of common cmd line entries around, one of
> which is a simple curl command to create collection1 and the corresponding
> command to remove it.
>
> Something that might make things a bit easier could also be an alternate
> developer 'run example' target that could also fire off a create for
> collection1.
>
> For other cases, I'd think of it like a database example - step one is to
> create a table named foo, not dive in on the built in table1.
>
> - Mark
>
> On Sun Nov 02 2014 at 4:34:57 PM Shawn Heisey  wrote:
>>
>> I hope this won't serve to interrupt the momentum for SOLR-3619 and
>> related work, just perhaps influence the direction.  What I'm going to
>> relate here probably is no surprise to anyone involved with the effort.
>>
>> In response to a user's question on IRC, I wanted to take their
>> fieldType, incorporate it into the main example on my source code
>> checkout, and fire up the example so I could poke around on the analysis
>> tab.
>>
>> I only had branch_5x, and it was a clean checkout, so I did "svn up"
>> followed by "ant example" and got to work.  The first thing I discovered
>> is that there's no longer a conf directory in example/solr/collection1.
>> I poked around for a bit, found what looked like a likely candidate
>> config, and modified the schema.xml.  Then I poked around a bit more and
>> learned that "bin/solr start" was what I need to use to get it running.
>>
>> I was surprised to see that when Solr started, there were no cores
>> loaded at all.  Thinking about all the discussions around this topic,
>> this makes a lot of sense ... but it does make it hard to implement what
>> I typically use the example for, which is quick tests of small
>> config/schema changes or user-provided scenarios from IRC or the mailing
>> list.
>>
>> I think the README or other documentation should probably cover exactly
>> what to do if your intent is to use collection1, modify it, and poke
>> around.
>>
>> Separately, I noticed that there are a lot of java options used to start
>> the server, including an increase to PermSize.  In all my time using
>> Solr, I've never had to change that.  Do we have common problems with
>> the new startup script and solr version that require it?
>>
>> Thanks,
>> Shawn
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6041) remove sugar FieldInfo.isIndexed and .hasDocValues

2014-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194105#comment-14194105
 ] 

ASF subversion and git services commented on LUCENE-6041:
-

Commit 1636218 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1636218 ]

LUCENE-6041: remove FieldInfo.isIndex/hasDocValues sugar APIs

> remove sugar FieldInfo.isIndexed and .hasDocValues
> --
>
> Key: LUCENE-6041
> URL: https://issues.apache.org/jira/browse/LUCENE-6041
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6041.patch
>
>
> Follow-on from LUCENE-6039; these two booleans don't really exist: they are 
> just sugar to check for IndexOptions.NO and DocValuesType.NO.  I think for 
> the low-level schema API in Lucene we should not expose such sugar: callers 
> should have to be explicit.






[jira] [Resolved] (LUCENE-6041) remove sugar FieldInfo.isIndexed and .hasDocValues

2014-11-02 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6041.

Resolution: Fixed

> remove sugar FieldInfo.isIndexed and .hasDocValues
> --
>
> Key: LUCENE-6041
> URL: https://issues.apache.org/jira/browse/LUCENE-6041
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6041.patch
>
>
> Follow-on from LUCENE-6039; these two booleans don't really exist: they are 
> just sugar to check for IndexOptions.NO and DocValuesType.NO.  I think for 
> the low-level schema API in Lucene we should not expose such sugar: callers 
> should have to be explicit.






Re: An experience and some thoughts about solr/example -> solr/server

2014-11-02 Thread Mark Miller
I'm sure after such a large change there are some things to smooth over.

As far as starting with no cores or collections, I very strongly think that
is the way to go.

I've been working with a Solr checkout designed this way for a long time
and I tend to just keep a text file of common cmd line entries around, one
of which is a simple curl command to create collection1 and the
corresponding command to remove it.
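Such a crib-sheet pair of commands might look like the following, against the Collections API (host, port, and shard/replica counts here are illustrative — check the reference guide for the exact parameters in your version):

```shell
# Create a minimal collection named collection1 (numbers are illustrative)
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=collection1&numShards=1&replicationFactor=1'

# ...and the corresponding command to remove it again
curl 'http://localhost:8983/solr/admin/collections?action=DELETE&name=collection1'
```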

Something that might make things a bit easier could also be an alternate
developer 'run example' target that could also fire off a create for
collection1.

For other cases, I'd think of it like a database example - step one is to
create a table named foo, not dive in on the built in table1.

- Mark

On Sun Nov 02 2014 at 4:34:57 PM Shawn Heisey  wrote:

> I hope this won't serve to interrupt the momentum for SOLR-3619 and
> related work, just perhaps influence the direction.  What I'm going to
> relate here probably is no surprise to anyone involved with the effort.
>
> In response to a user's question on IRC, I wanted to take their
> fieldType, incorporate it into the main example on my source code
> checkout, and fire up the example so I could poke around on the analysis
> tab.
>
> I only had branch_5x, and it was a clean checkout, so I did "svn up"
> followed by "ant example" and got to work.  The first thing I discovered
> is that there's no longer a conf directory in example/solr/collection1.
> I poked around for a bit, found what looked like a likely candidate
> config, and modified the schema.xml.  Then I poked around a bit more and
> learned that "bin/solr start" was what I need to use to get it running.
>
> I was surprised to see that when Solr started, there were no cores
> loaded at all.  Thinking about all the discussions around this topic,
> this makes a lot of sense ... but it does make it hard to implement what
> I typically use the example for, which is quick tests of small
> config/schema changes or user-provided scenarios from IRC or the mailing
> list.
>
> I think the README or other documentation should probably cover exactly
> what to do if your intent is to use collection1, modify it, and poke
> around.
>
> Separately, I noticed that there are a lot of java options used to start
> the server, including an increase to PermSize.  In all my time using
> Solr, I've never had to change that.  Do we have common problems with
> the new startup script and solr version that require it?
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_67) - Build # 11557 - Failure!

2014-11-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11557/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.testDistribSearch

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([69E7D53997CC38FD:E8015B21E09358C1]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.server.ssl.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:631)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:291)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:418)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:391)
at org.apache.solr.cloud.RecoveryZkTest.doTest(RecoveryZkTest.java:93)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat

An experience and some thoughts about solr/example -> solr/server

2014-11-02 Thread Shawn Heisey
I hope this won't serve to interrupt the momentum for SOLR-3619 and
related work, just perhaps influence the direction.  What I'm going to
relate here probably is no surprise to anyone involved with the effort.

In response to a user's question on IRC, I wanted to take their
fieldType, incorporate it into the main example on my source code
checkout, and fire up the example so I could poke around on the analysis
tab.

I only had branch_5x, and it was a clean checkout, so I did "svn up"
followed by "ant example" and got to work.  The first thing I discovered
is that there's no longer a conf directory in example/solr/collection1. 
I poked around for a bit, found what looked like a likely candidate
config, and modified the schema.xml.  Then I poked around a bit more and
learned that "bin/solr start" was what I need to use to get it running.

I was surprised to see that when Solr started, there were no cores
loaded at all.  Thinking about all the discussions around this topic,
this makes a lot of sense ... but it does make it hard to implement what
I typically use the example for, which is quick tests of small
config/schema changes or user-provided scenarios from IRC or the mailing
list.

I think the README or other documentation should probably cover exactly
what to do if your intent is to use collection1, modify it, and poke around.

Separately, I noticed that there are a lot of java options used to start
the server, including an increase to PermSize.  In all my time using
Solr, I've never had to change that.  Do we have common problems with
the new startup script and solr version that require it?

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1251: POMs out of sync

2014-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1251/

2 tests failed.
FAILED:  
org.apache.solr.cloud.HttpPartitionTest.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=28971, name=Thread-8218, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:465)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1639)
at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:430)
at org.apache.solr.cloud.ZkController.access$100(ZkController.java:101)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:269)
at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=28971, name=Thread-8218, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at 
org.apache.http.impl.client.AbstractHttpClient.doE

[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-02 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Added a "whitebox" test, DistributedFacetPivotWhiteBoxTest, that simulates 
pivot stats shard requests in two cases: fetching top-level pivots, and 
refinement requests. Both contain stats on pivots.

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}.
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"






[jira] [Commented] (SOLR-6670) change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE

2014-11-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194009#comment-14194009
 ] 

Erick Erickson commented on SOLR-6670:
--

Since this hasn't been released yet, I don't see any problem with changing it, 
even though it'll change the names of a couple of collections API parameters.

I'll probably be checking these changes in today.


> change BALANCESLICEUNIQUE to BALANCESHARDUNIQUE
> ---
>
> Key: SOLR-6670
> URL: https://issues.apache.org/jira/browse/SOLR-6670
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> JIRA for Jan's comments on SOLR-6513:
> I thought we agreed to prefer the term "shard" over "slice", so I think we 
> should do this for this API as well.
> The only place in our refguide we use the word "slice" is in How SolrCloud 
> Works [1] and that description is disputed.
> The refguide explanation of what a shard is can be found in Shards and 
> Indexing Data in SolrCloud [2], quoting:
> When your data is too large for one node, you can break it up and store it in 
> sections by creating one or more shards. Each is a portion of the logical 
> index, or core, and it's the set of all nodes containing that section of the 
> index.
> So I'm proposing a rename of this API to BALANCESHARDUNIQUE and a rewrite of 
> [1].
> [1] https://cwiki.apache.org/confluence/display/solr/How+SolrCloud+Works
> [2] 
> https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
> Note Mark's comment on that JIRA, but I think it would be best to continue to 
> talk about "shards" with user-facing operations.






[jira] [Commented] (SOLR-6681) remove configurations from solrconfig.xml and eliminate per core class loading

2014-11-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193974#comment-14193974
 ] 

Jan Høydahl commented on SOLR-6681:
---

+1 to the overall idea.

The long-term solution would be a better plugin architecture that keeps the 
dependencies for each feature in self-contained zip files; see SOLR-5103.

A short-term improvement might be for Solr to parse {{$SOLRHOME/lib}} 
recursively for jars; then users could freely organize their jars in 
subfolders and keep track of which jars belong together, which feature they 
serve, etc.
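A sketch of what that recursive jar scan could look like, using only the JDK's NIO API (the `JarScanner` class and its methods are hypothetical, not existing Solr code; `demo` just builds a throwaway directory tree to exercise the scan):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JarScanner {
    // Collect every *.jar under root, however deeply it is nested.
    public static List<Path> findJars(Path root) {
        try (Stream<Path> walk = Files.walk(root)) {
            return walk.filter(Files::isRegularFile)
                       .filter(p -> p.getFileName().toString().endsWith(".jar"))
                       .sorted()
                       .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Build a temp lib/ tree with jars in subfolders and count what the scan finds.
    public static int demo() {
        try {
            Path tmp = Files.createTempDirectory("libtest");
            Files.createDirectories(tmp.resolve("plugins").resolve("feature-x"));
            Files.createFile(tmp.resolve("core.jar"));
            Files.createFile(tmp.resolve("plugins").resolve("feature-x").resolve("dep.jar"));
            Files.createFile(tmp.resolve("README.txt")); // ignored: not a jar
            return findJars(tmp).size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2
    }
}
```

The resulting list could then back a single URLClassLoader for the whole node, which fits the one-classpath-per-node direction discussed in the issue.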

> remove  configurations from solrconfig.xml and eliminate per core class 
> loading
> 
>
> Key: SOLR-6681
> URL: https://issues.apache.org/jira/browse/SOLR-6681
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>
> As Solr moves more towards cloud, solrconfig is stored in ZooKeeper. Storing 
> local library information in a file kept in a remote location common to all 
> nodes makes no sense. 
> In this new world, cores are created and managed by the SolrCloud system, so 
> there is no need for separate classloading per core, and a lot of 
> unnecessary classloading issues in Solr can go away.
> Going forward, all cores in a node will have only one classpath. We may 
> define a standard directory such as 'ext' under the HOME and let users store 
> all their jars there.






[jira] [Commented] (SOLR-5103) Plugin Improvements

2014-11-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193973#comment-14193973
 ] 

Jan Høydahl commented on SOLR-5103:
---

Also see from mailing list: http://search-lucene.com/m/WwzTb2jQWpl1

> Plugin Improvements
> ---
>
> Key: SOLR-5103
> URL: https://issues.apache.org/jira/browse/SOLR-5103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: Trunk
>
>
> I think for 5.0, we should make it easier to add plugins by defining a plugin 
> package, a la a Hadoop job jar: a self-contained archive of a plugin 
> that can be easily installed (even from the UI!) and configured 
> programmatically.






Re: NSF DataViz Hackathon for Polar CyberInfrastructure: New York, NY 11/3/2014 - 11/4/2014 Call for Remote Participation

2014-11-02 Thread Tom Barber

Looks like a great event!

On 02/11/14 17:37, Mattmann, Chris A (3980) wrote:


--
*Tom Barber* | Technical Director

meteorite bi
*T:* +44 20 8133 3730
*W:* www.meteorite.bi | *Skype:* meteorite.consulting
*A:* Surrey Technology Centre, Surrey Research Park, Guildford, GU2 7YG, UK


[jira] [Commented] (LUCENE-6041) remove sugar FieldInfo.isIndexed and .hasDocValues

2014-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193962#comment-14193962
 ] 

ASF subversion and git services commented on LUCENE-6041:
-

Commit 1636166 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1636166 ]

LUCENE-6041: remove FieldInfo.isIndex/hasDocValues sugar APIs

> remove sugar FieldInfo.isIndexed and .hasDocValues
> --
>
> Key: LUCENE-6041
> URL: https://issues.apache.org/jira/browse/LUCENE-6041
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6041.patch
>
>
> Follow-on from LUCENE-6039; these two booleans don't really exist: they are 
> just sugar to check for IndexOptions.NO and DocValuesType.NO.  I think for 
> the low-level schema API in Lucene we should not expose such sugar: callers 
> should have to be explicit.






[jira] [Commented] (SOLR-6184) Replication fetchLatestIndex always failed, that will occur the recovery error.

2014-11-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193955#comment-14193955
 ] 

Ishan Chattopadhyaya commented on SOLR-6184:


[~raintung] Did you try increasing the commitReserveDuration parameter? 
Reserving a commit point would ensure that the index files corresponding to the 
latest commit point being fetched won't be deleted (due to, for example, lucene 
segment merges). 

Since it takes ~20 minutes to fetch the index, could you try setting this to 
~20-25 minutes, maybe?
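For reference, commitReserveDuration is configured on the master's ReplicationHandler in solrconfig.xml. A sketch of what such a configuration might look like (values are illustrative, not taken from the reporter's setup):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- Reserve each commit point for 25 minutes so that a slow full
         fetch is not invalidated when segment merges delete its files. -->
    <str name="commitReserveDuration">00:25:00</str>
  </lst>
</requestHandler>
```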

> Replication fetchLatestIndex always failed, that will occur the recovery 
> error.
> ---
>
> Key: SOLR-6184
> URL: https://issues.apache.org/jira/browse/SOLR-6184
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6, 4.6.1
> Environment: the index file size is more than 70G
>Reporter: Raintung Li
>  Labels: difficulty-medium, impact-medium
> Attachments: Solr-6184.txt
>
>
> Copying a full 70G index usually takes at least 20 minutes at 100M read/write 
> network or disk throughput. If a hard commit happens within those 20 minutes, 
> the full-index snap pull fails and the temp folder is removed because the 
> pull task failed.
> In production, index updates happen every minute, so the retried pull task 
> always fails because the index is always changing; the constant retries also 
> keep network and disk usage high.
> My suggestion: fetchLatestIndex could be retried at some frequency without 
> removing the tmp folder, copying the largest index files first. A retried 
> fetchLatestIndex would not download the same large files again, only the 
> files from the latest commit, so the task would finally succeed.






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40-ea-b09) - Build # 4306 - Still Failing!

2014-11-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4306/
Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseSerialGC (asserts: false)

1 tests failed.
REGRESSION:  org.apache.solr.TestDistributedSearch.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:65279/aa/g

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:65279/aa/g
at 
__randomizedtesting.SeedInfo.seed([363C8186B2D84C41:B7DA0F9EC5872C7D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:581)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:438)
at 
org.apache.solr.BaseDistributedSearchTestCase.indexr(BaseDistributedSearchTestCase.java:420)
at 
org.apache.solr.TestDistributedSearch.doTest(TestDistributedSearch.java:120)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:875)
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.

[jira] [Commented] (SOLR-6681) remove <lib> configurations from solrconfig.xml and eliminate per-core class loading

2014-11-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193941#comment-14193941
 ] 

Shawn Heisey commented on SOLR-6681:


I think it's a good idea to get rid of <lib> configuration tags and put all 
jars in one place.  How to manage that in the example without having duplicates 
of jars already present in "dist" (and making the already bulky download even 
larger) is one of the thorny problems we face.  The bin/solr script could copy 
or move jars the first time it runs ... which might open a whole new set of 
problems, particularly when it is upgrade time.

I don't think we need to use an "ext" directory.  Solr already has a "default" 
classpath that gets loaded without any configuration -- SOLRHOME/lib.  Whether 
or not we rename that from lib to ext is something we could bikeshed about 
forever, so I'm inclined to go with inertia and leave it alone.

SOLR-4852 describes some weird and irritating problems I've had related to this 
lib directory.
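For context, the per-core jar loading under discussion is configured today with <lib> directives in solrconfig.xml, along the lines of this illustrative fragment (the paths are hypothetical, modeled on the shipped examples):

```xml
<config>
  <!-- Each core resolves these paths relative to its own instance
       directory; this is the per-core classloading the issue proposes
       to remove in favor of a single node-wide classpath. -->
  <lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
</config>
```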


> remove <lib> configurations from solrconfig.xml and eliminate per-core class 
> loading
> 
>
> Key: SOLR-6681
> URL: https://issues.apache.org/jira/browse/SOLR-6681
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>
> As Solr moves more towards the cloud, solrconfig is stored in ZooKeeper. 
> Storing local library information in a file kept in a remote, common 
> location for all nodes makes no sense. 
> In this new world, cores are created and managed by the SolrCloud system, so 
> there is no need for separate classloading for each core, and a lot of 
> unnecessary classloading issues in Solr can go away.
> Going forward, all cores in a node will have a single classpath. We may 
> define a standard directory such as 'ext' under the Solr home and let users 
> store all their jars there.






[jira] [Commented] (SOLR-6517) CollectionsAPI call REBALANCELEADERS

2014-11-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193930#comment-14193930
 ] 

Erick Erickson commented on SOLR-6517:
--

Yep. I'm not sure what you're concerned about. Do you see a problem?

REBALANCELEADERS does, indeed, do just that; there's no real magic here.

Look at the code in ZkController.register if you're wondering how 
preferredLeaders join at the head of the queue.

> CollectionsAPI call REBALANCELEADERS
> 
>
> Key: SOLR-6517
> URL: https://issues.apache.org/jira/browse/SOLR-6517
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.0, Trunk
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch
>
>
> Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
> assigned, there has to be a command "make it so Mr. Solr". This is something 
> of a placeholder to collect ideas. One wouldn't want to flood the system with 
> hundreds of re-assignments at once. Should this be synchronous or asynchronous? 
> Should it make the best attempt but not worry about perfection? Should it???
> a collection=name parameter would be required and it would re-elect all the 
> leaders that were on the 'wrong' node
> I'm thinking of optionally allowing one to specify a shard in the case where 
> you wanted to make a very specific change. Note that there's no need to 
> specify a particular replica, since there should be only a single 
> preferredLeader per slice.
> This command would do nothing to any slice that did not have a replica with a 
> preferredLeader role. Likewise it would do nothing if the slice in question 
> already had the leader role assigned to the node with the preferredLeader 
> role.






NSF DataViz Hackathon for Polar CyberInfrastructure: New York, NY 11/3/2014 - 11/4/2014 Call for Remote Participation

2014-11-02 Thread Mattmann, Chris A (3980)
Call for Remote Participation

The NSF DataViz Hackathon for Polar CyberInfrastructure will bring
together Polar researchers, Cyber Infrastructure experts, Data
Visualization experts, and members of the community interested in
connecting technology, science and communication.

The Hackathon website is located at:

http://nsf-polar-cyberinfrastructure.github.io/datavis-hackathon/

The Hackathon will take place tomorrow, Monday, November 3, 2014
and Tuesday November 4, 2014, beginning at 9am ET and located at:

The Orozco Room Parsons - The New School 66 W 12th St, 7th floor
(Between Fifth Avenue and Sixth Avenue) New York, NY 10011

Though onsite attendance is closed, we are offering remote participation
in the meeting via our GitHub repository:

https://github.com/NSF-Polar-Cyberinfrastructure/datavis-hackathon

You can participate in the meeting by reviewing, commenting and
providing your feedback on our current sessions, located at:

https://github.com/NSF-Polar-Cyberinfrastructure/datavis-hackathon/issues

Please note that a GitHub account is required to participate.

We look forward to your remote participation and to the hackathon
and its results!

Cheers, 
Chris Mattmann


++
Chris Mattmann, Ph.D.
Chief Architect
Instrument Software and Science Data Systems Section (398)
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 168-519, Mailstop: 168-527
Email: chris.a.mattm...@nasa.gov
WWW:  http://sunset.usc.edu/~mattmann/
++
Adjunct Associate Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++








[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 665 - Still Failing

2014-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/665/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([6CFAFC4E7B581D4D:ED1C72560C077D71]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evalua

[jira] [Updated] (LUCENE-6040) Speedup broadword bit selection

2014-11-02 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6040:
-
Attachment: LUCENE-6040.patch

The new select() implementation is so much simpler than the previous one that I 
preferred to move it into BitUtil, and delete the BroadWord class completely in 
this patch.
The javadocs for select() refer to this issue.
The tests are also simplified and moved into a new TestBitUtil class.
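For readers unfamiliar with bit selection: select(w, r) returns the position of the (r+1)-th set bit of a word. A table-lookup version of the technique can be sketched in Python (an illustration only; the actual patch is Java code in BitUtil operating on 64-bit longs):

```python
# Byte-wise popcount table: POPCNT[b] = number of set bits in byte value b.
POPCNT = [bin(b).count("1") for b in range(256)]

def select(word, r):
    """Return the index of the (r+1)-th set bit of a 64-bit word
    (r is 0-based), or -1 if the word has fewer than r+1 set bits."""
    for byte_idx in range(8):
        b = (word >> (8 * byte_idx)) & 0xFF
        count = POPCNT[b]
        if r < count:
            # The target bit lies within this byte: scan its 8 bits.
            for bit in range(8):
                if (b >> bit) & 1:
                    if r == 0:
                        return 8 * byte_idx + bit
                    r -= 1
        else:
            # Skip the whole byte, discounting its set bits.
            r -= count
    return -1
```

The table lookup replaces the branch-free broadword arithmetic of the old BroadWord.select() with a simple per-byte scan.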

> Speedup broadword bit selection
> ---
>
> Key: LUCENE-6040
> URL: https://issues.apache.org/jira/browse/LUCENE-6040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-6040.patch, LUCENE-6040.patch
>
>
> Use table lookup instead of some broadword manipulations






[jira] [Comment Edited] (LUCENE-6040) Speedup broadword bit selection

2014-11-02 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193886#comment-14193886
 ] 

Paul Elschot edited comment on LUCENE-6040 at 11/2/14 3:52 PM:
---

The new select() implementation is so much simpler than the previous one that I 
preferred to move it into BitUtil, and delete the BroadWord class completely in 
this patch.
The javadocs for select() refer to this issue.
The tests are also simplified and moved into a new TestBitUtil class.


was (Author: paul.elsc...@xs4all.nl):
The new select() implementation is so simpler than the previous one that I 
preferred to move it into BitUtil, and delete the BroadWord class completely in 
this patch.
The javadocs for select() refer to this issue.
The tests are also simplified and moved into a new TestBitUtil class.

> Speedup broadword bit selection
> ---
>
> Key: LUCENE-6040
> URL: https://issues.apache.org/jira/browse/LUCENE-6040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-6040.patch, LUCENE-6040.patch
>
>
> Use table lookup instead of some broadword manipulations






[jira] [Updated] (SOLR-6554) Speed up overseer operations for collections with stateFormat > 1

2014-11-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6554:

Attachment: SOLR-6554.patch

Here's a better patch which takes care of some of the nocommits:
# The live nodes information is read from the ZkStateReader's cluster state 
because the one in ZkStateWriter's cluster state might be stale
# We force refresh the clusterstate once at the beginning of the loop and then 
only if there's an error in the main loop.
# Added a simple TestClusterStateMutator, I'll add more.

I still have a few ideas I'd like to try.

> Speed up overseer operations for collections with stateFormat > 1
> -
>
> Key: SOLR-6554
> URL: https://issues.apache.org/jira/browse/SOLR-6554
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-6554.patch, SOLR-6554.patch
>
>
> Right now (after SOLR-5473 was committed), a node watches a collection only 
> if stateFormat=1 or if that node hosts at least one core belonging to that 
> collection.
> This means that a node which is the overseer operates on all collections but 
> watches only a few. So any read goes directly to zookeeper which slows down 
> overseer operations.
> Let's have the overseer node watch all collections always and never remove 
> those watches (except when the collection itself is deleted).






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2197 - Failure

2014-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2197/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([C26B8D465E44942A:438D035E291BF416]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$S

[jira] [Comment Edited] (SOLR-6478) need docs / tests of the "rules" as far as collection names go

2014-11-02 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193817#comment-14193817
 ] 

Anurag Sharma edited comment on SOLR-6478 at 11/2/14 12:45 PM:
---

A unit test covering the allowed and disallowed collection names is attached. 

The W3C URI specification (http://www.w3.org/Addressing/URL/uri-spec.html) 
defines the valid character set for URIs. The code currently has no filters 
to disallow any characters; the W3C guidelines could be used to filter some 
characters in the collection name.

Query params containing special characters or whitespace can be sent after 
encoding when making API calls. Here is an example that creates the "rand 
chars {£ & $ 1234567890-+=`~@\}" collection:

{code}
$ curl 
'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
"status":0,
"QTime":28509},
  "success":{
"":{
  "responseHeader":{
"status":0,
"QTime":22011},
  "core":"rand chars {£ & $ 1234567890-+=`~@\}_shard1_replica1"}}}

{code}


was (Author: anuragsharma):
A unit test covering the allowed and disallowed collection names is attached. 

The W3C URI specification (http://www.w3.org/Addressing/URL/uri-spec.html) 
defines the valid character set for URIs. The code currently has no filters 
to disallow any characters; the W3C guidelines could be used to filter some 
characters in the collection name.

Query params containing special characters or whitespace can be sent after 
encoding when making API calls. Here is an example that creates the "rand 
chars {£ & $ 1234567890-+=`~@}" collection:
{code}
$ curl 
'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
"status":0,
"QTime":28509},
  "success":{
"":{
  "responseHeader":{
"status":0,
"QTime":22011},
  "core":"rand chars {£ & $ 1234567890-+=`~@}_shard1_replica1"}}}

{code}

> need docs / tests of the "rules" as far as collection names go
> --
>
> Key: SOLR-6478
> URL: https://issues.apache.org/jira/browse/SOLR-6478
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>  Labels: difficulty-medium, impact-medium
> Attachments: SOLR-6478.patch
>
>
> historically, the rules for "core" names have been vague but implicitly 
> defined based on the rule that it had to be a valid directory path name - but 
> i don't know that we've ever documented anywhere what the rules are for a 
> "collection" name when dealing with the Collections API.
> I haven't had a chance to try this, but i suspect that using the Collections 
> API you can create any collection name you want, and the zk/clusterstate.json 
> data will all be fine, and you'll then be able to request anything you want 
> from that collection as long as you properly URL escape it in your request 
> URLs ... but we should have a test that tries to do this, and document any 
> actual limitations that pop up and/or fix those limitations so we really can 
> have arbitrary collection names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6478) need docs / tests of the "rules" as far as collection names go

2014-11-02 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6478:

Attachment: SOLR-6478.patch

A unit test covering allowed and disallowed collection names is attached.

The W3C URI specification (http://www.w3.org/Addressing/URL/uri-spec.html) defines 
which characters are valid in a URI. The code currently applies no filter to 
disallow any character; the W3C guideline could be used to restrict some characters 
in collection names.

Query parameters containing special characters or whitespace can be sent 
URL-encoded when making API calls. Here is an example that creates the 
collection "rand chars {£ & $ 1234567890-+=`~@}":
{code}
$ curl 
'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
    "status":0,
    "QTime":28509},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":22011},
      "core":"rand chars {£ & $ 1234567890-+=`~@}_shard1_replica1"}}}

{code}

> need docs / tests of the "rules" as far as collection names go
> --
>
> Key: SOLR-6478
> URL: https://issues.apache.org/jira/browse/SOLR-6478
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>  Labels: difficulty-medium, impact-medium
> Attachments: SOLR-6478.patch
>
>
> historically, the rules for "core" names have been vague but implicitly 
> defined based on the rule that it had to be a valid directory path name - but 
> i don't know that we've ever documented anywhere what the rules are for a 
> "collection" name when dealing with the Collections API.
> I haven't had a chance to try this, but i suspect that using the Collections 
> API you can create any collection name you want, and the zk/clusterstate.json 
> data will all be fine, and you'll then be able to request anything you want 
> from that collection as long as you properly URL escape it in your request 
> URLs ... but we should have a test that tries to do this, and document any 
> actual limitations that pop up and/or fix those limitations so we really can 
> have arbitrary collection names.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1875 - Still Failing!

2014-11-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1875/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC (asserts: true)

1 tests failed.
FAILED:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
Give up waiting for no results: 
q=id%3A999&rows=0&_trace=did_it_expire_yet&_stateVer_=collection1%3A15 
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Give up waiting for no results: 
q=id%3A999&rows=0&_trace=did_it_expire_yet&_stateVer_=collection1%3A15 
expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([E83088324E2CED55:69D6062A39738D69]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.waitForNoResults(DistribDocExpirationUpdateProcessorTest.java:189)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:95)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure

[jira] [Created] (SOLR-6689) Ability to partially update multiple documents with a query

2014-11-02 Thread Siddharth Gargate (JIRA)
Siddharth Gargate created SOLR-6689:
---

 Summary: Ability to partially update multiple documents with a 
query
 Key: SOLR-6689
 URL: https://issues.apache.org/jira/browse/SOLR-6689
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.10.2
Reporter: Siddharth Gargate


Solr allows us to update parts of a document, but this is limited to a single 
document specified by its ID. We should be able to partially update multiple 
documents matching a specified query.
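For reference, today's single-document partial ("atomic") update sends a document containing the target id plus per-field modifier objects such as {"set": value}. A minimal sketch of building that payload client-side (the helper name is hypothetical; only the JSON shape reflects Solr's atomic-update syntax):

```python
import json

def atomic_update_doc(doc_id, field_changes):
    """Build one atomic-update document: the target id plus a
    {"set": value} modifier per field. Only this one document is touched."""
    doc = {"id": doc_id}
    for field, value in field_changes.items():
        doc[field] = {"set": value}
    return doc

# Solr's update handler accepts a JSON array of such documents.
payload = json.dumps([atomic_update_doc("doc1", {"price": 9.99})])
print(payload)
# [{"id": "doc1", "price": {"set": 9.99}}]
```

A query-based variant, as proposed here, would need the server to expand the query into matching documents (or apply the update internally), since this format can only name documents explicitly by id.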







[jira] [Comment Edited] (LUCENE-6037) PendingTerm cannot be cast to PendingBlock

2014-11-02 Thread zhanlijun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193740#comment-14193740
 ] 

zhanlijun edited comment on LUCENE-6037 at 11/2/14 11:29 AM:
-

The lucene-spatial module changes are unrelated to the bug, because the bug 
also happens when I use the stock lucene-spatial module.

The lucene-spatial module is widely used in mobile internet applications in 
China. One of my application scenarios is calculating the distance between the 
user and all POIs in a city. However, when the number of POIs in one city 
exceeds 100,000, Lucene's distance calculation becomes very slow (more than 
10ms). Lucene uses spatial4j's HaversineRAD to calculate the distance, and I 
ran a test on my computer (2.9GHz Intel Core i7, 8GB RAM):

POI num   |  time
50,000    |  7ms
100,000   |  14ms
1,000,000 |  144ms

I simplified the distance calculation formula. The simplification greatly 
improves computational efficiency while maintaining usable precision. Here are 
the test results:

test point pair                    | distSimplify (m)  | distHaversineRAD (m) | diff (m)
(39.941, 116.45) (39.94, 116.451)  | 140.024276920     | 140.028516719814     | 0.0
(39.96, 116.45) (39.94, 116.40)    | 4804.113098854450 | 4804.421153907680    | 0.3
(39.96, 116.45) (39.94, 117.30)    | 72438.90919479560 | 72444.54071519510    | 5.6
(39.26, 115.25) (41.04, 117.30)    | 263516.676171262  | 263508.559218867     | 8.1

POI num   |  time
50,000    |  0.1ms
100,000   |  0.3ms
1,000,000 |  4ms


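The comment does not include the simplified formula itself. One common simplification with this kind of accuracy trade-off is the equirectangular approximation (scale the longitude difference by the cosine of the mean latitude, then apply Pythagoras); the sketch below compares it against haversine and is an illustration of the trade-off, not necessarily the author's exact formula:

```python
from math import radians, sin, cos, sqrt, asin

R = 6371000.0  # mean earth radius in meters (choice of radius is an assumption)

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def equirect(lat1, lon1, lat2, lon2):
    """Equirectangular approximation: cheap (no inverse trig per pair),
    accurate for short distances like nearby POIs."""
    x = radians(lon2 - lon1) * cos(radians((lat1 + lat2) / 2))
    y = radians(lat2 - lat1)
    return R * sqrt(x * x + y * y)

# First test point pair from the table: both results are ~140 m and
# agree to well under a meter at this scale.
d_hav = haversine(39.941, 116.45, 39.94, 116.451)
d_eq = equirect(39.941, 116.45, 39.94, 116.451)
```

The speedup comes from dropping the per-pair trigonometric inverse; as the table shows, the error stays in the meter range even at ~260 km.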

> PendingTerm cannot be cast to PendingBlock
> --
>
> Key: LUCENE-6037
> URL: https://issues.apache.org/jira/browse/LUCENE-6037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.3.1
> Environment: ubuntu 64bit
>Reporter: zhanlijun
>Priority: Critical
> Fix For: 4.3.1
>
>
> the error as follows:
> java.lang.ClassCastException: 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingTerm cannot be cast to 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingBlock
> at 
> org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finish(BlockTreeTermsWriter.java:1014)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:553)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:480)
> at 
> org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:378)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:413)
> at 
> org.apache.lucene.index.IndexWriter.u

Re: [VOTE] Release 4.10.2 RC0

2014-11-02 Thread Michael McCandless
Can you start a new thread with this question?  Thanks.


Mike McCandless

http://blog.mikemccandless.com

On Sun, Nov 2, 2014 at 12:19 AM, Anurag Sharma  wrote:

> Hi,
>
> Not sure whether this is the right thread for this question; let me know if I
> should open another one.
>
> I am running smokeTestRelease.py for the first time on my local machine, in
> the context of https://issues.apache.org/jira/browse/SOLR-6474, to understand
> how the smoke test can be launched using the script.
>
> First I ran it with Python 2.7 and hit SyntaxError issues; those went away
> when I switched to Python 3.4.2.
>
> Now I get an error when trying to run the smoke test with the command below:
> python -u smokeTestRelease.py
> http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
>
> Java 1.7 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
> Traceback (most recent call last):
>   File "smokeTestRelease.py", line 1522, in 
> main()
>   File "smokeTestRelease.py", line 1465, in main
> c = parse_config()
>   File "smokeTestRelease.py", line 1351, in parse_config
> c.java = make_java_config(parser, c.test_java8)
>   File "smokeTestRelease.py", line 1303, in make_java_config
> run_java7 = _make_runner(java7_home, '1.7')
>   File "smokeTestRelease.py", line 1294, in _make_runner
> shell=True, stderr=subprocess.STDOUT).decode('utf-8')
>   File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 620, in
> check_output
> raise CalledProcessError(retcode, process.args, output=output)
> subprocess.CalledProcessError: Command 'export JAVA_HOME="C:\Program
> Files\Java\jdk1.7.0_51" PATH="C:\Program Files\Java\jdk1.7.0_51/bin:$PATH"
> JAVACMD="C:\Program Files\Java\jdk1.7.0_51/bin/java"; java -version'
> returned non-zero exit status 1
>
> The only usage example I can find in the code takes a URL param, and it's
> giving the above error:
> Example usage:
> python3.2 -u dev-tools/scripts/smokeTestRelease.py
> http://people.apache.org/~whoever/staging_area/lucene-solr-4.3.0-RC1-rev1469340
>
> Please let me know if I am missing anything (path/env settings) when running
> it with a URL param as above. Also, is there a way to run the smoke test
> locally without giving a URL param?
>
> Thanks
> Anurag
>
> On Sun, Oct 26, 2014 at 3:02 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> This constant is gone as of 5.x (LUCENE-5900): good riddance ;)
>>
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Sun, Oct 26, 2014 at 5:30 AM, Uwe Schindler  wrote:
>>
>>> I think we should do this also in 5.x and trunk. I am not sure if the
>>> problem exists there, too, but that would make it easier.
>>>
>>>
>>>
>>> There is only one reason why you _*could*_ make those versions
>>> explicit: If you want to prevent users mixing the test framework with an
>>> incorrect version of Lucene (e.g. use test-framework version 4.10.0 with
>>> Lucene 4.10.2). But we have no check for this, so this is theoretical…
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> -
>>>
>>> Uwe Schindler
>>>
>>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>>
>>> http://www.thetaphi.de
>>>
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> *From:* Michael McCandless [mailto:luc...@mikemccandless.com]
>>> *Sent:* Sunday, October 26, 2014 10:24 AM
>>> *To:* Lucene/Solr dev
>>>
>>> *Subject:* Re: [VOTE] Release 4.10.2 RC0
>>>
>>>
>>>
>>> +1, I'll just do that.
>>>
>>>
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>>
>>> On Sun, Oct 26, 2014 at 5:21 AM, Uwe Schindler  wrote:
>>>
>>> Why not set the constant in LTC to LUCENE_LATEST? It is no longer an
>>> enum, so there is no need to set it explicitly. Version.LUCENE_LATEST
>>> explicitly points to the latest version.
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> -
>>>
>>> Uwe Schindler
>>>
>>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>>
>>> http://www.thetaphi.de
>>>
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> *From:* Michael McCandless [mailto:luc...@mikemccandless.com]
>>> *Sent:* Sunday, October 26, 2014 9:38 AM
>>> *To:* Lucene/Solr dev; Simon Willnauer
>>> *Subject:* Re: [VOTE] Release 4.10.2 RC0
>>>
>>>
>>>
>>> Argh.  Why does no Lucene test fail ...
>>>
>>>
>>>
>>> I'll add a failing test, fix the constant, respin.
>>>
>>>
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>>
>>> On Sun, Oct 26, 2014 at 4:32 AM, Simon Willnauer <
>>> simon.willna...@gmail.com> wrote:
>>>
>>> I think we need to respin here - I upgraded ES and I got some failures
>>> since the
>>>
>>>
>>>
>>> LuceneTestCase#TEST_VERSION_CURRENT is still 4.10.1
>>>
>>>
>>>
>>> see:
>>>
>>>
>>>
>>>
>>> http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_10/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
>>>
>>>
>>>
>>> I am not sure if this has an impact on anything, but it's definitely
>>> wrong, no? I think there should be a test in Lucene that checks that this
>>> version points to `Version.LATEST`.
>>>
>>>
>>>
>>> simon
>>>
>>>
>>>
>>> On Sat

RE: One more problem with the new layout

2014-11-02 Thread Uwe Schindler
Hi,

I think this problem is caused by relicts of the previous checkout layout: 
solr/example/ is no longer under IVY's control, and some directories that were 
previously covered by svn:ignore are now obsolete. This happens if you svn 
upped a previous checkout but did not completely clean it. You have two 
options:

- Use a fresh checkout
- Run "ant clean clean-jars" once from the root folder, then remove all files 
reported by "svn status" as unversioned (TortoiseSVN has a task for this; 
otherwise do some sed/xargs/... stuff). After that you have a clean checkout.
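The "sed/xargs/... stuff" can also be scripted. A minimal sketch (the parsing assumes svn's usual status layout of seven status columns, a space, then the path; the destructive step is left commented out so you can review the list first):

```python
import os
import shutil
import subprocess

def unversioned_paths(status_output):
    """Extract paths flagged '?' (unversioned) or 'I' (ignored) from
    `svn status --no-ignore` output; the path starts at column 9."""
    return [line[8:].strip()
            for line in status_output.splitlines()
            if line[:1] in ("?", "I")]

# In a real checkout (destructive -- review the list before deleting!):
#   out = subprocess.run(["svn", "status", "--no-ignore"],
#                        capture_output=True, text=True).stdout
#   for path in unversioned_paths(out):
#       shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
```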

If I do this, everything works correctly, except for one problem (which 
existed before the cleanup, too)...

Here is the problem: After running "ant run-example":

 [java] 2230 [main] ERROR org.apache.solr.servlet.SolrDispatchFilter - 
Could not start Solr. Check solr/home property and the logs
 [java] 2260 [main] ERROR org.apache.solr.core.SolrCore - 
null:org.apache.solr.common.SolrException: solr.xml does not exist in 
C:\Users\Uwe Schindler\Projects\lucene\branch_5x-1\solr\example\solr\solr.xml 
cannot start Solr

The folder does not exist here! I have not tried out trunk.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Sunday, November 02, 2014 12:27 AM
> To: dev@lucene.apache.org
> Subject: One more problem with the new layout
> 
> After building the code, switching into example and starting it up, precommit
> fails with complaints about solr/example/lib being not under source code
> control.
> 
> Of course, how much of this is pilot error I'm not quite sure
> 
> Erick
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-6517) CollectionsAPI call REBALANCELEADERS

2014-11-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193752#comment-14193752
 ] 

Noble Paul commented on SOLR-6517:
--

bq.So REBALANCELEADERS will cause any shard where the current leader is not the 
preferredLeader to re-elect leadership.

As I see in the code, it just sends a message to the overseer to change the 
leader of the shard.

> CollectionsAPI call REBALANCELEADERS
> 
>
> Key: SOLR-6517
> URL: https://issues.apache.org/jira/browse/SOLR-6517
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.0, Trunk
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch
>
>
> Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
> assigned, there has to be a command "make it so Mr. Solr". This is something 
> of a placeholder to collect ideas. One wouldn't want to flood the system with 
> hundreds of re-assignments at once. Should this be synchronous or async? 
> Should it make the best attempt but not worry about perfection? Should it???
> a collection=name parameter would be required and it would re-elect all the 
> leaders that were on the 'wrong' node
> I'm thinking of optionally allowing one to specify a shard in the case where 
> you wanted to make a very specific change. Note that there's no need to 
> specify a particular replica, since there should be only a single 
> preferredLeader per slice.
> This command would do nothing to any slice that did not have a replica with a 
> preferredLeader role. Likewise it would do nothing if the slice in question 
> already had the leader role assigned to the node with the preferredLeader 
> role.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 11393 - Failure!

2014-11-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11393/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
 (asserts: false)

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([440C9033D912998E:C5EA1E2BAE4DF9B2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.

[jira] [Commented] (LUCENE-6037) PendingTerm cannot be cast to PendingBlock

2014-11-02 Thread zhanlijun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193742#comment-14193742
 ] 

zhanlijun commented on LUCENE-6037:
---

The failure occurs at 
org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finish(BlockTreeTermsWriter.java:1014).
The code there is
 "final PendingBlock root = (PendingBlock) pending.get(0);"

Please tell me: in which cases is pending.get(0) of type PendingTerm?

> PendingTerm cannot be cast to PendingBlock
> --
>
> Key: LUCENE-6037
> URL: https://issues.apache.org/jira/browse/LUCENE-6037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.3.1
> Environment: ubuntu 64bit
>Reporter: zhanlijun
>Priority: Critical
> Fix For: 4.3.1
>
>
> the error as follows:
> java.lang.ClassCastException: 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingTerm cannot be cast to 
> org.apache.lucene.codecs.BlockTreeTermsWriter$PendingBlock
> at 
> org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finish(BlockTreeTermsWriter.java:1014)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:553)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:480)
> at 
> org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:378)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:413)
> at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1283)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1243)
> at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1228)


