[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 10699 - Still Failing!

2014-07-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10699/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

All tests passed

Build Log:
[...truncated 29154 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: "1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar"
 was: "1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0")
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   => 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.39s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 59 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5692) Deprecate spatial DisjointSpatialFilter

2014-07-01 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049652#comment-14049652
 ] 

Ryan McKinley commented on LUCENE-5692:
---

+1 to remove DisjointSpatialFilter

Using another boolean field to represent whether a shape exists seems very easy, 
and it would avoid using the FieldCache.


> Deprecate spatial DisjointSpatialFilter
> ---
>
> Key: LUCENE-5692
> URL: https://issues.apache.org/jira/browse/LUCENE-5692
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Affects Versions: 5.0
>Reporter: David Smiley
>
> The spatial predicate "IsDisjointTo" is almost the same as the inverse of 
> "Intersects", except that it shouldn't match documents without spatial data.  
> In another sense it's as if the query shape were inverted.
> DisjointSpatialFilter is a utility filter that works (or worked, rather) by 
> using the FieldCache to see which documents have spatial data 
> (getDocsWithField()). Calculating that was probably very slow but it was at 
> least cacheable. Since LUCENE-5666 (v5/trunk only), Rob replaced this to use 
> DocValues.  However for some SpatialStrategies (PrefixTree based) it wouldn't 
> make any sense to use DocValues *just* so that at search time you could call 
> getDocsWithField() when there's no other need for the un-inversion (e.g. no 
> need to lookup terms by document).
> Perhaps an immediate fix is simply to revert the change made to 
> DisjointSpatialFilter so that it uses the FieldCache again, if that works 
> (though it's not public?).  But stepping back a bit, this 
> DisjointSpatialFilter is really something unfortunate that doesn't work as 
> well as it could because it's not at the level of Solr or ES -- that is, 
> there's no access to a filter-cache.  So I propose I simply remove it, and if 
> a user wants to do this for real, they should index a boolean field marking 
> whether there's spatial data and then combine that with a NOT and Intersects, 
> in a straightforward way.  
> Alternatively, some sort of inverting query shape could be developed, 
> although it wouldn't work with the SpatialPrefixTree technique because there 
> is no edge distinction -- the edge matches normally and notwithstanding 
> changes to RPT algorithms it would also match the edge of an inverted shape.
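The replacement proposed in the description can be sketched at the set level: index a boolean "has shape" marker and compute Disjoint as hasShape AND NOT Intersects (in Lucene terms, a BooleanQuery with a MUST clause on the marker and a MUST_NOT clause on the Intersects query). A minimal plain-Java illustration of that set algebra, with hypothetical doc-id sets rather than the actual Lucene API:

```java
import java.util.Set;
import java.util.TreeSet;

public class DisjointSketch {
    // Disjoint = docs that have a shape but do not intersect the query shape.
    // In Lucene this would be: +hasShape:true -Intersects(queryShape).
    public static Set<Integer> disjoint(Set<Integer> hasShape, Set<Integer> intersects) {
        Set<Integer> result = new TreeSet<>(hasShape); // start from docs with spatial data
        result.removeAll(intersects);                  // subtract the Intersects matches
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> hasShape = new TreeSet<>(Set.of(0, 1, 2, 3));
        Set<Integer> intersects = new TreeSet<>(Set.of(1, 3));
        // docs 0 and 2 have spatial data yet are disjoint from the query shape
        System.out.println(disjoint(hasShape, intersects)); // [0, 2]
    }
}
```

Note that, exactly as the description says, documents without spatial data never match: they are absent from the hasShape set to begin with, so no FieldCache/DocValues lookup is needed at search time.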



--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (LUCENE-5793) Add equals/hashCode to FieldType

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049647#comment-14049647
 ] 

Adrien Grand commented on LUCENE-5793:
--

+1

> Add equals/hashCode to FieldType
> 
>
> Key: LUCENE-5793
> URL: https://issues.apache.org/jira/browse/LUCENE-5793
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shay Banon
> Attachments: LUCENE-5793.patch
>
>
> It would be nice to add equals and hashCode to FieldType, so one can easily 
> check whether two instances are the same and, for example, reuse existing 
> default implementations.
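The idea can be sketched with a minimal value class (a stand-in for illustration, not the actual Lucene FieldType): equals compares every configuration option, and hashCode is derived from the same fields, so the equals/hashCode contract holds and equal types can be detected and reused.

```java
import java.util.Objects;

// Stand-in for a field-type class: value-based equals/hashCode over its options.
public class FieldOptions {
    final boolean stored;
    final boolean tokenized;
    final int numericPrecisionStep;

    FieldOptions(boolean stored, boolean tokenized, int numericPrecisionStep) {
        this.stored = stored;
        this.tokenized = tokenized;
        this.numericPrecisionStep = numericPrecisionStep;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof FieldOptions)) return false;
        FieldOptions other = (FieldOptions) o;
        // compare every option; a new option added later must be added here too
        return stored == other.stored
            && tokenized == other.tokenized
            && numericPrecisionStep == other.numericPrecisionStep;
    }

    @Override
    public int hashCode() {
        // derived from the same fields as equals, so equal objects hash alike
        return Objects.hash(stored, tokenized, numericPrecisionStep);
    }

    public static void main(String[] args) {
        FieldOptions a = new FieldOptions(true, false, 8);
        FieldOptions b = new FieldOptions(true, false, 8);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```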







Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread david.w.smi...@gmail.com
On Tue, Jul 1, 2014 at 6:54 PM, Robert Muir  wrote:
> FieldCache is historically lenient, it allows all kinds of nonsense,
> such as uninverting a multi-valued field as single-valued (e.g. leaves
> gaps in ordinals and other bullshit that will cause this assertion to
> fail).
>
> I can fix fieldcache to be strict (since everything else in the
> codebase is now well-behaved), so you get a better exception message
> saying that what the spatial module is doing is wrong?

If the FieldCache/UninvertingReader is so lenient, then perhaps
TestUtil.checkReader should never try to validate it?

~ David




[jira] [Commented] (LUCENE-5802) SpatialStrategy DocValues & FieldType customizability

2014-07-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049633#comment-14049633
 ] 

David Smiley commented on LUCENE-5802:
--

The solution may be providing a FieldType in constructors to applicable 
Strategies.  There might be a utility class to facilitate generating queries 
(e.g. one that switches on the type borrowing similar code from Solr's 
TrieField) and other stuff.
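One sketch of that pattern, with hypothetical names rather than the actual spatial API: the strategy takes its numeric type and DocValues choice at construction, and both the index-time and search-time code paths read from the same configuration, so Float vs. Double stays consistent.

```java
// Hypothetical sketch of passing field configuration into a strategy constructor.
public class ConfiguredStrategy {
    public enum NumberType { FLOAT, DOUBLE }

    private final NumberType numberType;
    private final boolean docValues;

    public ConfiguredStrategy(NumberType numberType, boolean docValues) {
        this.numberType = numberType;
        this.docValues = docValues;
    }

    // Index-time behavior derives from the config...
    public String describeIndexField() {
        return (numberType == NumberType.FLOAT ? "float" : "double")
             + (docValues ? "+docvalues" : "");
    }

    // ...and search-time query construction derives from the same config,
    // so the two code paths cannot disagree about the field's type.
    public String describeQuery() {
        return "range over " + (numberType == NumberType.FLOAT ? "float" : "double");
    }

    public static void main(String[] args) {
        ConfiguredStrategy s = new ConfiguredStrategy(NumberType.FLOAT, true);
        System.out.println(s.describeIndexField()); // float+docvalues
        System.out.println(s.describeQuery());      // range over float
    }
}
```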

> SpatialStrategy DocValues & FieldType customizability
> -
>
> Key: LUCENE-5802
> URL: https://issues.apache.org/jira/browse/LUCENE-5802
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: David Smiley
>
> The SpatialStrategy API is a simple facade to index spatial data and query by 
> it in a consistent way, hiding the implementation.  For indexing data, it has 
> one applicable method:
> {code:java}
> public abstract Field[] createIndexableFields(Shape shape);
> {code}
> The base abstraction provides no further configuration. BBoxStrategy and 
> PointVectorStrategy have a way to set the precisionStep of the underlying 
> Double trie fields.  But none have a way to use Floats, and none have a way 
> to specify the use of DocValues (and which type).  Perhaps there are other 
> plausible knobs to turn.  It is actually more than just indexing since at 
> search time it may have to change accordingly (e.g. search difference between 
> Float & Double). PrefixTreeStrategy is likely to soon deprecate/remove any 
> applicability here (see LUCENE-5692).
> If there is no change that could reasonably be made to SpatialStrategy 
> itself, what is the pattern that BBoxStrategy and others should use?







Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread Robert Muir
The conversation is over. You are trying to dodge what I have to say.
I'm not going to assist you any further. Please read what I already
said.

On Tue, Jul 1, 2014 at 10:49 PM, david.w.smi...@gmail.com
 wrote:
> Let's discuss this on the issue:
> https://issues.apache.org/jira/browse/LUCENE-5713?focusedCommentId=14049562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14049562
>
> On Tue, Jul 1, 2014 at 6:54 PM, Robert Muir  wrote:
>>
>> most likely the cause is the spatial module itself?
>>
>> Every other part of lucene uses docvalues, but this one still relies
>> on fieldcache (i didnt change it, the apis for adding things are
>> somewhat convoluted).
>>
>> FieldCache is historically lenient, it allows all kinds of nonsense,
>> such as uninverting a multi-valued field as single-valued (e.g. leaves
>> gaps in ordinals and other bullshit that will cause this assertion to
>> fail).
>>
>> I can fix fieldcache to be strict (since everything else in the
>> codebase is now well-behaved), so you get a better exception message
>> saying that what the spatial module is doing is wrong?
>>
>> On Tue, Jul 1, 2014 at 6:13 PM, david.w.smi...@gmail.com
>>  wrote:
>> > Another case of:
>> > https://issues.apache.org/jira/browse/LUCENE-5713
>> > (cause unknown)
>> >
>> > ~ David Smiley
>> > Freelance Apache Lucene/Solr Search Consultant/Developer
>> > http://www.linkedin.com/in/davidwsmiley
>> >
>> >
>




[jira] [Commented] (LUCENE-5713) FieldCache related test failure

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049631#comment-14049631
 ] 

Robert Muir commented on LUCENE-5713:
-

Please don't try to move the discussion from the list here, only to make the 
same mistake again.

The bug is the spatial module. I explained exactly why in the previous message. 
I'm not going to repeat it again.

> FieldCache related test failure
> ---
>
> Key: LUCENE-5713
> URL: https://issues.apache.org/jira/browse/LUCENE-5713
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: David Smiley
>
> The following reproduces for me and [~varunshenoy] on trunk:
> lucene/spatial %>ant test  -Dtestcase=SpatialOpRecursivePrefixTreeTest 
> -Dtests.method=testContains -Dtests.seed=3AD27D1EB168088A
> {noformat}
> [junit4]   1> Strategy: 
> RecursivePrefixTreeStrategy(SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
>[junit4]   1> CheckReader failed
>[junit4]   1> test: field norms.OK [0 fields]
>[junit4]   1> test: terms, freq, prox...OK [207 terms; 208 terms/docs 
> pairs; 0 tokens]
>[junit4]   1> test: stored fields...OK [8 total field count; avg 2 
> fields per doc]
>[junit4]   1> test: term vectorsOK [0 total vector count; avg 
> 0 term/freq vector fields per doc]
>[junit4]   1> test: docvalues...ERROR [dv for field: 
> SpatialOpRecursivePrefixTreeTest has -1 ord but is not marked missing for 
> doc: 0]
>[junit4]   1> java.lang.RuntimeException: dv for field: 
> SpatialOpRecursivePrefixTreeTest has -1 ord but is not marked missing for 
> doc: 0
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.checkSortedDocValues(CheckIndex.java:1414)
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1536)
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1367)
>[junit4]   1>  at 
> org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:229)
>[junit4]   1>  at 
> org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:216)
>[junit4]   1>  at 
> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1597)
> {noformat}
> A 1-in-500 random check of the index on newSearcher was triggered, hitting 
> this.  DocValues used to not be enabled for this spatial test, but [~rcmuir] 
> added it recently as part of the move to the DocValues API in lieu of the 
> FieldCache API, and because the DisjointSpatialFilter uses getDocsWithField 
> (though nothing else).  That probably doesn't have anything to do with 
> whatever the problem here is, though.







[jira] [Resolved] (SOLR-6221) Xpath predicates is not working. I want to take element value from an xml tag by comparing attr in the way of /bookstore/book[@lang='en'].But solr didnt read the value fr

2014-07-01 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-6221.
--

Resolution: Invalid

Please bring up issues like this on the user's list first, rather than raising a 
JIRA, so that others can weigh in on whether it's really a bug or just needs some 
clarification on your part. We try to keep JIRAs for known bugs/improvements.

If the consensus on the user's list is that it's a bug, we can re-open the 
issue.

> Xpath predicates is not working. I want to take element value from an xml tag 
> by comparing attr in the way of /bookstore/book[@lang='en'].But solr didnt 
> read the value from the tag. Help me.   
> -
>
> Key: SOLR-6221
> URL: https://issues.apache.org/jira/browse/SOLR-6221
> Project: Solr
>  Issue Type: Bug
>Reporter: Balaji
>








[jira] [Created] (LUCENE-5802) SpatialStrategy DocValues & FieldType customizability

2014-07-01 Thread David Smiley (JIRA)
David Smiley created LUCENE-5802:


 Summary: SpatialStrategy DocValues & FieldType customizability
 Key: LUCENE-5802
 URL: https://issues.apache.org/jira/browse/LUCENE-5802
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley


The SpatialStrategy API is a simple facade to index spatial data and query by 
it in a consistent way, hiding the implementation.  For indexing data, it has 
one applicable method:
{code:java}
public abstract Field[] createIndexableFields(Shape shape);
{code}
The base abstraction provides no further configuration. BBoxStrategy and 
PointVectorStrategy have a way to set the precisionStep of the underlying 
Double trie fields.  But none have a way to use Floats, and none have a way to 
specify the use of DocValues (and which type).  Perhaps there are other 
plausible knobs to turn.  It is actually more than just indexing since at search 
time it may have to change accordingly (e.g. search difference between Float & 
Double). PrefixTreeStrategy is likely to soon deprecate/remove any 
applicability here (see LUCENE-5692).

If there is no change that could reasonably be made to SpatialStrategy itself, 
what is the pattern that BBoxStrategy and others should use?







[jira] [Created] (SOLR-6221) Xpath predicates is not working. I want to take element value from an xml tag by comparing attr in the way of /bookstore/book[@lang='en'].But solr didnt read the value fro

2014-07-01 Thread Balaji (JIRA)
Balaji created SOLR-6221:


 Summary: Xpath predicates is not working. I want to take element 
value from an xml tag by comparing attr in the way of 
/bookstore/book[@lang='en'].But solr didnt read the value from the tag. Help 
me.   
 Key: SOLR-6221
 URL: https://issues.apache.org/jira/browse/SOLR-6221
 Project: Solr
  Issue Type: Bug
Reporter: Balaji










[jira] [Updated] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6216:


Attachment: SOLR-6216.patch

Added some more unit tests

> Better faceting for multiple intervals on DV fields
> ---
>
> Key: SOLR-6216
> URL: https://issues.apache.org/jira/browse/SOLR-6216
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
> SOLR-6216.patch
>
>
> There are two ways to have faceting on values ranges in Solr right now: 
> “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
> doing something similar:
> {code:java}
> searcher.numDocs(rangeQ , docs)
> {code}
> The good thing about this implementation is that it can benefit from caching. 
> The bad thing is that it may be slow with cold caches, and that there will be 
> a query for each of the ranges.
> A different implementation would be one that works similar to regular field 
> faceting, using doc values and validating ranges for each value of the 
> matching documents. This implementation would sometimes be faster than Range 
> Faceting / Query Faceting, especially in cases where caches are not very 
> effective, such as under a high update rate, or where ranges change frequently.
> Functionally, the result should be exactly the same as the one obtained by 
> doing a facet query for every interval
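The doc-values approach described above can be sketched in plain Java: a single pass over each matching document's value, testing it against every interval, instead of one range query per interval (interval bounds are taken as inclusive here for simplicity):

```java
public class IntervalFacets {
    // counts[i] = number of values falling into intervals[i] = {lower, upper}, inclusive.
    // Mirrors the proposal: one pass over doc values rather than one
    // range query (searcher.numDocs) per interval.
    public static int[] count(long[] docValues, long[][] intervals) {
        int[] counts = new int[intervals.length];
        for (long v : docValues) {
            for (int i = 0; i < intervals.length; i++) {
                if (v >= intervals[i][0] && v <= intervals[i][1]) {
                    counts[i]++;
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        long[] values = {1, 5, 7, 12};
        long[][] intervals = {{0, 5}, {5, 10}, {10, 20}};
        // overlapping intervals are allowed: 5 lands in both of the first two
        System.out.println(java.util.Arrays.toString(count(values, intervals))); // [2, 2, 1]
    }
}
```

The result matches what a facet query per interval would produce, but the cost is one value scan regardless of how many intervals are requested.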







[jira] [Comment Edited] (SOLR-4905) Cross core joins don't work for SolrCloud collections and/or aliases

2014-07-01 Thread Jack Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049570#comment-14049570
 ] 

Jack Lo edited comment on SOLR-4905 at 7/2/14 3:21 AM:
---

I have noticed this issue has been lying around for a year; it seems nobody 
has bothered to use JOIN in SolrCloud, so I decided to tackle it myself.

A small patch has been uploaded here to allow fromIndex to specify a collection 
in a cloud environment. Currently, it works if the fromIndex collection is a 
single shard with at least one replica on each node. I am planning to support 
multi-shard collections, but I am not really sure how to do it given that I am 
not that familiar with the internal mechanics of Solr.

Even if we support multiple shards, given the current implementation of 
JoinQParser, I think we can only support a collection with at least one replica 
of every shard physically residing on each node, since we need all-local 
IndexSearchers. If we need full SolrCloud join support, I think we need to 
revamp JoinQParser or build something at a higher level that gathers the term 
collection from remote shards in StandardRequestHandler.

By the way, I noticed we haven't used Lucene's JoinUtil; is there a reason not 
to use it? Its implementation seems cleaner than the one in Solr right now. I 
have no idea how JoinQParser works, especially the getdocset stage.


was (Author: jacklo):
I have noticed this issue has been lying around for a year; it seems nobody 
has bothered to use JOIN in SolrCloud, so I decided to tackle it myself.

A small patch has been uploaded here to allow fromIndex to specify a collection 
in a cloud environment. Currently, it works if the fromIndex collection is a 
single shard with at least one replica on each node. I am planning to support 
multi-shard collections, but I am not really sure how to do it given that I am 
not that familiar with the internal mechanics of Solr.

Even if we support multiple shards, given the current implementation of 
JoinQParser, I think we can only support a collection with at least one replica 
of every shard physically residing on each node, since we need all-local 
IndexSearchers. If we need full SolrCloud join support, I think we need to 
revamp JoinQParser or build something at a higher level that gathers the term 
collection from remote shards in StandardRequestHandler.

By the way, I noticed we haven't used Lucene's JoinUtil; is there a reason not 
to use it? Its implementation seems cleaner than the one in Solr right now. I 
have no idea what it's doing in JoinQParser.

> Cross core joins don't work for SolrCloud collections and/or aliases
> 
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
> Attachments: patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, alias name (we have created aliases to SolrCloud 
> collections), or the automatically generated core name (i.e. 
> _shard1_replica1) work as the fromIndex parameter for a 
> cross-core join.
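The term-gathering idea from the comment above (collect the join terms from the "from" side, then filter the "to" side by that set) can be sketched in plain Java, independent of JoinQParser or JoinUtil; in a distributed join, step 1's term set would be gathered from remote shards:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class TermJoinSketch {
    // Step 1: collect the values of fromField from the matching "from" documents.
    // Step 2: keep the "to" documents whose toField value is in that term set.
    public static List<Map<String, String>> join(List<Map<String, String>> fromDocs,
                                                 String fromField,
                                                 List<Map<String, String>> toDocs,
                                                 String toField) {
        Set<String> terms = new HashSet<>();
        for (Map<String, String> d : fromDocs) {
            terms.add(d.get(fromField));
        }
        return toDocs.stream()
                     .filter(d -> terms.contains(d.get(toField)))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> from = List.of(Map.of("authorId", "a1"));
        List<Map<String, String>> to = List.of(
            Map.of("id", "a1", "name", "Melville"),
            Map.of("id", "a2", "name", "Goethe"));
        System.out.println(join(from, "authorId", to, "id").size()); // 1
    }
}
```

This is only the set semantics, with hypothetical field names; the hard part the comment identifies is shipping the term set across shards efficiently, which a local-only sketch like this does not address.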







[jira] [Comment Edited] (SOLR-4905) Cross core joins don't work for SolrCloud collections and/or aliases

2014-07-01 Thread Jack Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049570#comment-14049570
 ] 

Jack Lo edited comment on SOLR-4905 at 7/2/14 3:19 AM:
---

I have noticed this issue has been lying around for a year; it seems nobody 
has bothered to use JOIN in SolrCloud, so I decided to tackle it myself.

A small patch has been uploaded here to allow fromIndex to specify a collection 
in a cloud environment. Currently, it works if the fromIndex collection is a 
single shard with at least one replica on each node. I am planning to support 
multi-shard collections, but I am not really sure how to do it given that I am 
not that familiar with the internal mechanics of Solr.

Even if we support multiple shards, given the current implementation of 
JoinQParser, I think we can only support a collection with at least one replica 
of every shard physically residing on each node, since we need all-local 
IndexSearchers. If we need full SolrCloud join support, I think we need to 
revamp JoinQParser or build something at a higher level that gathers the term 
collection from remote shards in StandardRequestHandler.

By the way, I noticed we haven't used Lucene's JoinUtil; is there a reason not 
to use it? Its implementation seems cleaner than the one in Solr right now. I 
have no idea what it's doing in JoinQParser.


was (Author: jacklo):
Partial patch to make SolrCloud join work

> Cross core joins don't work for SolrCloud collections and/or aliases
> 
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
> Attachments: patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, alias name (we have created aliases to SolrCloud 
> collections), or the automatically generated core name (i.e. 
> _shard1_replica1) work as the fromIndex parameter for a 
> cross-core join.







[jira] [Updated] (SOLR-4905) Cross core joins don't work for SolrCloud collections and/or aliases

2014-07-01 Thread Jack Lo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Lo updated SOLR-4905:
--

Attachment: patch.txt

Partial patch to make SolrCloud join work

> Cross core joins don't work for SolrCloud collections and/or aliases
> 
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
> Attachments: patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, alias name (we have created aliases to SolrCloud 
> collections), or the automatically generated core name (i.e. 
> _shard1_replica1) work as the fromIndex parameter for a 
> cross-core join.







Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread david.w.smi...@gmail.com
Let's discuss this on the issue:
https://issues.apache.org/jira/browse/LUCENE-5713?focusedCommentId=14049562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14049562

On Tue, Jul 1, 2014 at 6:54 PM, Robert Muir  wrote:
>
> most likely the cause is the spatial module itself?
>
> Every other part of lucene uses docvalues, but this one still relies
> on fieldcache (i didnt change it, the apis for adding things are
> somewhat convoluted).
>
> FieldCache is historically lenient, it allows all kinds of nonsense,
> such as uninverting a multi-valued field as single-valued (e.g. leaves
> gaps in ordinals and other bullshit that will cause this assertion to
> fail).
>
> I can fix fieldcache to be strict (since everything else in the
> codebase is now well-behaved), so you get a better exception message
> saying that what the spatial module is doing is wrong?
>
> On Tue, Jul 1, 2014 at 6:13 PM, david.w.smi...@gmail.com
>  wrote:
> > Another case of:
> > https://issues.apache.org/jira/browse/LUCENE-5713
> > (cause unknown)
> >
> > ~ David Smiley
> > Freelance Apache Lucene/Solr Search Consultant/Developer
> > http://www.linkedin.com/in/davidwsmiley
> >
> >




[jira] [Commented] (LUCENE-5713) FieldCache related test failure

2014-07-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049562#comment-14049562
 ] 

David Smiley commented on LUCENE-5713:
--

[~rcmuir], let's discuss the issue here instead of in dev-list emails.

FYI the test class has been renamed to "RandomSpatialOpFuzzyPrefixTreeTest" and 
the seed still triggers the error.

The one line in the test that sets up the UninvertingReader, in setUp(), 
follows:
{code:java}
//Only for Disjoint.  Ugh; need to find a better way.  LUCENE-5692
uninvertMap.put(getClass().getSimpleName(), UninvertingReader.Type.SORTED);
{code}

This test class tests RecursivePrefixTreeStrategy (RPT), one of something like 
5 SpatialStrategy implementations in the spatial API.  RPT *only* makes 
reference to DocValues (formerly FieldCache) for supporting the "disjoint" 
predicate. However, the particular test failure here is for the 
{{testContains}} method which tests the "contains" predicate which does _not_ 
use DocValues.  It would appear based on this fact that the bug is not in the 
spatial module.  How could it be?

It's out of scope in this issue to discuss why any particular SpatialStrategy 
implementation relies on UninvertingReader. I agree that all SpatialStrategies 
that do should be upgraded -- I'll file an issue for it.  But meanwhile, it 
seems to me there is a bug in UninvertingReader, even if, in a more perfect 
world, the spatial module wouldn't be using it in the first place.
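The invariant that CheckIndex is enforcing here can be stated in a few lines: for sorted doc values, a document whose ord is -1 must also be marked missing in docsWithField, and a lenient uninverting path that leaves gaps violates exactly this. A plain-Java sketch of the check (an illustration of the invariant, not the CheckIndex source):

```java
public class SortedDvCheck {
    // ords[doc] == -1 means "no value"; hasValue[doc] is the docsWithField bit.
    // The invariant: the two must agree for every document.
    public static int firstViolation(int[] ords, boolean[] hasValue) {
        for (int doc = 0; doc < ords.length; doc++) {
            boolean missingOrd = ords[doc] == -1;
            // violation: -1 ord but not marked missing, or a real ord marked missing
            if (missingOrd == hasValue[doc]) {
                return doc;
            }
        }
        return -1; // consistent
    }

    public static void main(String[] args) {
        // doc 0 has ord -1 yet docsWithField claims it has a value -> violation at 0,
        // matching the reported "has -1 ord but is not marked missing for doc: 0"
        System.out.println(firstViolation(new int[]{-1, 3}, new boolean[]{true, true}));  // 0
        System.out.println(firstViolation(new int[]{-1, 3}, new boolean[]{false, true})); // -1
    }
}
```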

> FieldCache related test failure
> ---
>
> Key: LUCENE-5713
> URL: https://issues.apache.org/jira/browse/LUCENE-5713
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: David Smiley
>
> The following reproduces for me and [~varunshenoy] on trunk:
> lucene/spatial %>ant test  -Dtestcase=SpatialOpRecursivePrefixTreeTest 
> -Dtests.method=testContains -Dtests.seed=3AD27D1EB168088A
> {noformat}
> [junit4]   1> Strategy: 
> RecursivePrefixTreeStrategy(SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
>[junit4]   1> CheckReader failed
>[junit4]   1> test: field norms.OK [0 fields]
>[junit4]   1> test: terms, freq, prox...OK [207 terms; 208 terms/docs 
> pairs; 0 tokens]
>[junit4]   1> test: stored fields...OK [8 total field count; avg 2 
> fields per doc]
>[junit4]   1> test: term vectorsOK [0 total vector count; avg 
> 0 term/freq vector fields per doc]
>[junit4]   1> test: docvalues...ERROR [dv for field: 
> SpatialOpRecursivePrefixTreeTest has -1 ord but is not marked missing for 
> doc: 0]
>[junit4]   1> java.lang.RuntimeException: dv for field: 
> SpatialOpRecursivePrefixTreeTest has -1 ord but is not marked missing for 
> doc: 0
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.checkSortedDocValues(CheckIndex.java:1414)
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1536)
>[junit4]   1>  at 
> org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1367)
>[junit4]   1>  at 
> org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:229)
>[junit4]   1>  at 
> org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:216)
>[junit4]   1>  at 
> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1597)
> {noformat}
> A 1-in-500 random chance to check the index on newSearcher was hit, 
> triggering this.  DocValues used to not be enabled for this spatial test, but 
> [~rcmuir] added it recently as part of the move to the DocValues API in lieu 
> of the FieldCache API, and because the DisjointSpatialFilter uses 
> getDocsWithField (though nothing else).  That probably doesn't have anything 
> to do with whatever the problem here is, though.
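The invariant that CheckIndex enforces in the error above -- for a SORTED
docvalues field, an ord of -1 must coincide with the doc being marked missing
-- can be sketched independently of Lucene. This is a simplified illustration
with hypothetical per-doc ordinal data, not Lucene's actual CheckIndex code:

```python
def check_sorted_doc_values(ords, missing):
    """Sketch of the sorted-docvalues consistency check: a doc whose
    ord is -1 must be flagged missing, and a doc flagged missing must
    have ord == -1.  `ords` is the per-doc ordinal list; `missing` is
    the set of doc ids with no value."""
    for doc, ord_ in enumerate(ords):
        if ord_ == -1 and doc not in missing:
            raise RuntimeError(
                "dv has -1 ord but is not marked missing for doc: %d" % doc)
        if ord_ >= 0 and doc in missing:
            raise RuntimeError(
                "dv is marked missing but has ord %d for doc: %d" % (ord_, doc))

# Consistent data passes; the failure mode from the report would raise.
check_sorted_doc_values([0, -1, 2], missing={1})
```

The reported failure is exactly the first branch: doc 0 carries ord -1 without
being in the missing set.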



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4759 - Still Failing

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4759/

All tests passed

Build Log:
[...truncated 29140 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/greenmail-1.3.1b.jar.sha1

 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-1.5.1.jar.sha1
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-1.5.1.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:467:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 140 minutes 29 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-trunk-Java7 #4753
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 19 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049505#comment-14049505
 ] 

ASF subversion and git services commented on SOLR-2245:
---

Commit 1607221 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607221 ]

SOLR-2245: Add sha1 files for greenmail, gimap, and sun java mail.
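The check that the Jenkins builds above fail amounts to: every bundled jar must
have a licenses/<jar>.sha1 file whose recorded digest matches the jar's actual
SHA-1. A minimal sketch of that verification, using throwaway files rather than
the real Solr layout:

```python
import hashlib
import tempfile
from pathlib import Path

def check_sha1(jar_path, sha1_path):
    """True iff the digest recorded in sha1_path matches jar_path.
    A missing .sha1 file corresponds to the 'MISSING sha1 checksum
    file' build error; the first whitespace-separated token of the
    file is taken as the digest."""
    sha1_file = Path(sha1_path)
    if not sha1_file.exists():
        return False
    expected = sha1_file.read_text().split()[0].lower()
    actual = hashlib.sha1(Path(jar_path).read_bytes()).hexdigest()
    return expected == actual

# Demo with a throwaway "jar" and its checksum file.
tmp = Path(tempfile.mkdtemp())
jar = tmp / "demo.jar"
jar.write_bytes(b"not really a jar")
sha1 = tmp / "demo.jar.sha1"
sha1.write_text(hashlib.sha1(jar.read_bytes()).hexdigest())
```

Committing the missing .sha1 files, as this commit does, is what turns the
check green again.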

> MailEntityProcessor Update
> --
>
> Key: SOLR-2245
> URL: https://issues.apache.org/jira/browse/SOLR-2245
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4, 1.4.1
>Reporter: Peter Sturge
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
> SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip
>
>
> This patch addresses a number of issues in the MailEntityProcessor 
> contrib-extras module.
> The changes are outlined here:
> * Added an 'includeContent' entity attribute to allow specifying that message 
> content be included independently of processing attachments -- e.g. including 
> message content, but not attachment content
> * Added a 'processAttachments' property, synonymous with the mis-spelled (and 
> singular) 'processAttachement' property. It functions the same as 
> processAttachement. Default = 'true'; if either is false, attachments are not 
> processed. Note that only one of the two should really be specified in a 
> given entity tag.
> * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
> unread, not deleted etc.), there is still a property value stored in the 
> 'flags' field (the value is the string "none")
> Note: there is a potential backward-compat issue with FLAGS.NONE for clients 
> that expect the absence of the 'flags' field to mean 'not read'. I'm 
> calculating this would be extremely rare, and it is inadvisable in any case 
> as user flags can be arbitrarily set, so fixing it up now will ensure future 
> client access will be consistent.
> * The folder name of an email is now included as a field called 'folder' 
> (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
> processing
> * The addPartToDocument() method that processes attachments is significantly 
> re-written, as there looked to be no real way the existing code would ever 
> actually process attachment content and add it to the row data
> Tested on the 3.x trunk with a number of popular imap servers.
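The FLAGS.NONE behavior described above -- always emitting a 'flags' value,
with the sentinel "none" when a message carries no flags -- can be sketched as
follows. This is an illustrative stand-in with hypothetical flag names, not the
MailEntityProcessor code itself:

```python
def flags_field(message_flags):
    """Return the value stored in the 'flags' field: the flag names
    joined with commas, or the string "none" when the message has no
    flags at all, so clients never see an absent field."""
    return ",".join(sorted(message_flags)) if message_flags else "none"
```

An unread, undeleted message thus indexes flags="none" instead of omitting the
field entirely, which is the backward-compat concern the note above raises.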






[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2017 - Still Failing

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2017/

All tests passed

Build Log:
[...truncated 29270 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/example/lib/ext/log4j-1.2.16.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/licenses/log4j-1.2.16.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:467:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 137 minutes 25 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-4.x-Java7 #2008
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 20 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread Robert Muir
most likely the cause is the spatial module itself?

Every other part of Lucene uses docvalues, but this one still relies
on FieldCache (I didn't change it; the APIs for adding things are
somewhat convoluted).

FieldCache is historically lenient: it allows all kinds of nonsense,
such as uninverting a multi-valued field as single-valued (e.g. it
leaves gaps in ordinals and other bullshit that will cause this
assertion to fail).

I can fix FieldCache to be strict (since everything else in the
codebase is now well-behaved), so you get a better exception message
stating that what the spatial module is doing is wrong?
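The leniency described here -- uninverting a multi-valued field as if it were
single-valued -- can be illustrated with a toy uninverter over hypothetical
postings data (not Lucene code): keeping only one value per doc leaves term
ordinals that no document references, exactly the kind of gap a strict checker
trips over.

```python
def uninvert_single_valued(postings, num_docs):
    """Toy uninverter: `postings` maps term -> list of doc ids.
    Each doc ends up with the ord of the LAST term seen for it, the
    way a lenient single-valued uninvert of a multi-valued field
    silently discards earlier values."""
    terms = sorted(postings)
    ords = [-1] * num_docs
    for ord_, term in enumerate(terms):
        for doc in postings[term]:
            ords[doc] = ord_  # later terms overwrite earlier ones
    return ords

# Doc 0 has terms "a" and "b"; "b" (ord 1) wins, so ord 0 for "a"
# is never referenced by any doc: a gap in the ordinal space.
ords = uninvert_single_valued({"a": [0], "b": [0], "c": [1]}, num_docs=2)
```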

On Tue, Jul 1, 2014 at 6:13 PM, david.w.smi...@gmail.com
 wrote:
> Another case of:
> https://issues.apache.org/jira/browse/LUCENE-5713
> (cause unknown)
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, Jul 1, 2014 at 6:08 PM, Policeman Jenkins Server
>  wrote:
>>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1682/
>> Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC
>>
>> 1 tests failed.
>> FAILED:
>> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint
>> {#9 seed=[9DF0DFC458CE610C:BB57C3A241E5FB1A]}
>>
>> Error Message:
>> CheckReader failed
>>
>> Stack Trace:
>> java.lang.RuntimeException: CheckReader failed
>> at
>> __randomizedtesting.SeedInfo.seed([9DF0DFC458CE610C:BB57C3A241E5FB1A]:0)
>> at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:240)
>> at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:218)
>> at
>> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1598)
>> at
>> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1572)
>> at
>> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1564)
>> at
>> org.apache.lucene.spatial.SpatialTestCase.commit(SpatialTestCase.java:131)
>> at
>> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.doTest(RandomSpatialOpFuzzyPrefixTreeTest.java:294)
>> at
>> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint(RandomSpatialOpFuzzyPrefixTreeTest.java:155)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>> at
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>> at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>> at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
>> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>> at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at
>> com.carrotsearch.rando

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20-ea-b15) - Build # 10697 - Still Failing!

2014-07-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10697/
Java: 32bit/jdk1.8.0_20-ea-b15 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 29123 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/greenmail-1.3.1b.jar.sha1

 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-1.5.1.jar.sha1
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-1.5.1.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 87 minutes 29 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0_20-ea-b15 -server -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049383#comment-14049383
 ] 

Matt Davis commented on LUCENE-5755:


I have all Lucene test cases compiling on my fork with the System.out check 
suppressed. Solr and javacc are untouched. I am using transitive dependencies, 
but that is an easy fix.

I disagree that these are not features that would be useful for gradle to 
implement natively. The following seem pretty useful in general, but I will let 
someone from gradle decide:

1) We use a special SecurityManager that prevents a test from escaping its 
working dir or from calling System.exit(). This security manager relies on the 
JUnit4 runner.
2) The runner not only parallelizes: it also keeps statistics about test 
runtimes and reorders tests on the next run, so a slow test scheduled last 
doesn't leave one JVM running long after the others have finished.
3) The runner also randomizes test execution order, which is important for 
catching bugs caused by tests leaving behind state that influences others.
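The statistics-driven reordering in point 2 amounts to longest-processing-time-
first scheduling. A sketch with hypothetical timings (not the actual
randomizedtesting implementation):

```python
import heapq

def schedule_slowest_first(durations, jvms):
    """Assign tests to `jvms` workers, starting the slowest first,
    so no single slow test is left running alone at the end.
    `durations` maps test name -> last observed runtime in seconds;
    returns (execution order, total wall-clock makespan)."""
    finish = [(0.0, j) for j in range(jvms)]  # (busy-until, worker id)
    heapq.heapify(finish)
    order = sorted(durations, key=durations.get, reverse=True)
    makespan = 0.0
    for test in order:
        t, j = heapq.heappop(finish)  # earliest-free worker
        t += durations[test]
        makespan = max(makespan, t)
        heapq.heappush(finish, (t, j))
    return order, makespan

# With 2 JVMs, starting the 10s test first gives a 10s makespan;
# scheduling it last would give 12s.
order, makespan = schedule_slowest_first(
    {"slow": 10.0, "a": 2.0, "b": 2.0, "c": 2.0}, jvms=2)
```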

Things like forbidden-apis make sense to call from Ant tasks, but it is easy 
to make a gradle plugin out of them.

As a side note, I think making eclipse projects with gradle is going to be a 
problem unless I am missing something: lucene-core's tests depend on 
lucene-test-framework, and lucene-test-framework depends back on lucene-core.

I will leave this to others who are more knowledgeable about Lucene to 
continue.

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/ Solr. 
> It's not even the tool's fault; it seems Lucene builds just hit the borders 
> of what it can do, especially in terms of submodule dependencies etc.
> I don't think Maven will help much too, given certain things I'd like to have 
> in the build (for example collect all tests globally for a single execution 
> phase at the end of the build, to support better load-balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) gradle build is javafx, for example.
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle






Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread david.w.smi...@gmail.com
Another case of:
https://issues.apache.org/jira/browse/LUCENE-5713
(cause unknown)

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley


On Tue, Jul 1, 2014 at 6:08 PM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1682/
> Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC
>
> 1 tests failed.
> FAILED:
>  
> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint
> {#9 seed=[9DF0DFC458CE610C:BB57C3A241E5FB1A]}
>
> Error Message:
> CheckReader failed
>
> Stack Trace:
> java.lang.RuntimeException: CheckReader failed
> at
> __randomizedtesting.SeedInfo.seed([9DF0DFC458CE610C:BB57C3A241E5FB1A]:0)
> at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:240)
> at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:218)
> at
> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1598)
> at
> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1572)
> at
> org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1564)
> at
> org.apache.lucene.spatial.SpatialTestCase.commit(SpatialTestCase.java:131)
> at
> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.doTest(RandomSpatialOpFuzzyPrefixTreeTest.java:294)
> at
> org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint(RandomSpatialOpFuzzyPrefixTreeTest.java:155)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at
> org.apache.lucene.util.TestRuleMarkFail

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1682 - Still Failing!

2014-07-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1682/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint
 {#9 seed=[9DF0DFC458CE610C:BB57C3A241E5FB1A]}

Error Message:
CheckReader failed

Stack Trace:
java.lang.RuntimeException: CheckReader failed
at 
__randomizedtesting.SeedInfo.seed([9DF0DFC458CE610C:BB57C3A241E5FB1A]:0)
at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:240)
at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:218)
at 
org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1598)
at 
org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1572)
at 
org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1564)
at 
org.apache.lucene.spatial.SpatialTestCase.commit(SpatialTestCase.java:131)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.doTest(RandomSpatialOpFuzzyPrefixTreeTest.java:294)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testDisjoint(RandomSpatialOpFuzzyPrefixTreeTest.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakC

[jira] [Comment Edited] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049370#comment-14049370
 ] 

Robert Muir edited comment on LUCENE-5755 at 7/1/14 9:59 PM:
-

{quote}
Very important is also to not use transitive dependencies (Lucene prefers to 
declare every source code dependency explicit, the exemption are just build 
tools like Ant tasks loaded from Maven central)
{quote}

Strong +1. 

There is a discussion on the legal list right now "transitive 3rd party 
dependencies", where others ran into such trouble: an apache-licensed 
dependency itself depending on LGPL. The fact is, we just have too many 
dependencies in this build (100+) to manage this transitively. 


was (Author: rcmuir):
{quote}
Very important is also to not use transitive dependencies (Lucene prefers to 
declare every source code dependency explicit, the exemption are just build 
tools like Ant tasks loaded from Maven central)
{quote}

Strong +1. There is a discussion on the legal list right now "transitive 3rd 
party dependencies", where others ran into such trouble: an apache-licensed 
dependency itself depending on LGPL. The fact is, we just have too many 
dependencies in this build (100+) to manage this transitively. 

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/ Solr. 
> It's not even the tool's fault; it seems Lucene builds just hit the borders 
> of what it can do, especially in terms of submodule dependencies etc.
> I don't think Maven will help much too, given certain things I'd like to have 
> in the build (for example collect all tests globally for a single execution 
> phase at the end of the build, to support better load-balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) gradle build is javafx, for example.
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle






[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049370#comment-14049370
 ] 

Robert Muir commented on LUCENE-5755:
-

{quote}
Very important is also to not use transitive dependencies (Lucene prefers to 
declare every source code dependency explicit, the exemption are just build 
tools like Ant tasks loaded from Maven central)
{quote}

Strong +1. There is a discussion on the legal list right now "transitive 3rd 
party dependencies", where others ran into such trouble: an apache-licensed 
dependency itself depending on LGPL. The fact is, we just have too many 
dependencies in this build (100+) to manage this transitively. 

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/Solr. 
> It's not even the tool's fault; it seems Lucene builds have just hit the limits 
> of what it can do, especially in terms of submodule dependencies.
> I don't think Maven will help much either, given certain things I'd like to have 
> in the build (for example, collecting all tests globally for a single execution 
> phase at the end of the build, to support better load balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) Gradle build is JavaFX:
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049355#comment-14049355
 ] 

Uwe Schindler edited comment on LUCENE-5755 at 7/1/14 9:42 PM:
---

Hi Matt,

I don't think the JUnit4 runner needs to be included in Gradle itself. 
Just see it as a plugin for running tests. Elasticsearch does the same, using 
this runner instead of Surefire in their Maven build. And because Gradle can 
execute Ant tasks without any problems, there is no need to make a 
"Gradle" plugin out of it. Just use Gradle's scripting to execute the Ant 
task. 

In fact, Lucene uses many other additional things to make the tests validate 
more: we use a special SecurityManager that, among other things, prevents tests 
from escaping the test's working dir or calling System.halt(). This security 
manager relies on the JUnit4 runner.

Also, the runner doesn't just parallelize: it keeps statistics about the 
running time of tests and reorders them on the next run, so slow tests don't 
leave the last JVM running longer than the rest. Another important thing is 
that the runner randomizes the test execution order, which is important for 
catching bugs caused by tests leaving state that influences others.

In any case, Lucene's build uses lots of other Ant tasks and Groovy scripts 
while building, e.g. forbidden-apis. Those are not available as native Gradle 
tasks (and likely never will be), but that's not a problem: you can invoke them 
as plain Ant tasks. You just have to declare a dependency in Gradle and then 
invoke them.
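
As a rough sketch (not taken from any actual Lucene build file; the task name, 
artifact coordinates, and taskdef classname below are illustrative assumptions), 
loading and invoking an external Ant task like forbidden-apis from Gradle could 
look like this:

```groovy
// Hypothetical sketch: load an Ant task from a Maven Central artifact and
// invoke it through Gradle's built-in AntBuilder. Coordinates, version, and
// the taskdef classname are illustrative, not verified values.
configurations {
    buildTools
}

dependencies {
    buildTools 'de.thetaphi:forbiddenapis:1.6'  // illustrative version
}

task forbiddenApis(dependsOn: 'classes') {
    doLast {
        // Make the Ant task available inside this build...
        ant.taskdef(name: 'forbiddenapis',
                    classname: 'de.thetaphi.forbiddenapis.ant.AntTask',
                    classpath: configurations.buildTools.asPath)
        // ...then invoke it exactly as the Ant build would.
        ant.forbiddenapis(dir: sourceSets.main.output.classesDir,
                          classpath: sourceSets.main.compileClasspath.asPath) {
            bundledsignatures(name: 'jdk-unsafe-1.7')
        }
    }
}
```

The point is that no Gradle-native port is needed: the dependency is declared 
once, and the Ant task runs unchanged inside the Gradle build.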

We also have special tasks to build the documentation, including XSL 
transformations and Markdown processing. This is all implemented in Groovy 
(the "ant pegdown task"), so you can just copy-paste the code into Gradle 
(which is Groovy). This is one reason why I prefer Gradle over Buildr.

So my wish:
We should use Gradle for dependency management, but all tasks/targets and 
functionality available in Lucene/Solr's Ant build should be preserved in an 
identical way, using the same external build tools (mainly Ant tasks) we 
currently use with Ant. Very important is also to *not* use transitive 
dependencies (Lucene prefers to declare every source code dependency 
explicitly; the only exceptions are build tools like Ant tasks loaded from 
Maven Central).


was (Author: thetaphi):
Hi Matt,

I don't think the JUnit4 runner needs to be included in Gradle itself. 
Just see it as a plugin for running tests. Elasticsearch does the same, using 
this runner instead of Surefire in their Maven build. And because Gradle can 
execute Ant tasks without any problems, there is no need to make a 
"Gradle" plugin out of it. Just use Gradle's scripting to execute the Ant 
task. 

In fact, Lucene uses many other additional things to make the tests validate 
more: we use a special SecurityManager that, among other things, prevents tests 
from escaping the test's working dir or calling System.halt(). This security 
manager relies on the JUnit4 runner.

Also, the runner doesn't just parallelize: it keeps statistics about the 
running time of tests and reorders them on the next run, so slow tests don't 
leave the last JVM running longer than the rest. Another important thing is 
that the runner randomizes the test execution order, which is important to 
prevent bugs that are caused by tests

In any case, Lucene's build uses lots of other Ant tasks and Groovy scripts 
while building, e.g. forbidden-apis. Those are not available as native Gradle 
tasks (and likely never will be), but that's not a problem: you can invoke them 
as plain Ant tasks. You just have to declare a dependency in Gradle and then 
invoke them.

We also have special tasks to build the documentation, including XSL 
transformations and Markdown processing. This is all implemented in Groovy 
(the "ant pegdown task"), so you can just copy-paste the code into Gradle 
(which is Groovy). This is one reason why I prefer Gradle over Buildr.

So my wish:
We should use Gradle for dependency management, but all tasks/targets and 
functionality available in Lucene/Solr's Ant build should be preserved in an 
identical way, using the same external build tools (mainly Ant tasks) we 
currently use with Ant. Very important is also to *not* use transitive 
dependencies (Lucene prefers to declare every source code dependency 
explicitly; the only exceptions are build tools like Ant tasks loaded from 
Maven Central).

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT a

[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049355#comment-14049355
 ] 

Uwe Schindler commented on LUCENE-5755:
---

Hi Matt,

I don't think the JUnit4 runner needs to be included in Gradle itself. 
Just see it as a plugin for running tests. Elasticsearch does the same, using 
this runner instead of Surefire in their Maven build. And because Gradle can 
execute Ant tasks without any problems, there is no need to make a 
"Gradle" plugin out of it. Just use Gradle's scripting to execute the Ant 
task. 

In fact, Lucene uses many other additional things to make the tests validate 
more: we use a special SecurityManager that, among other things, prevents tests 
from escaping the test's working dir or calling System.halt(). This security 
manager relies on the JUnit4 runner.

Also, the runner doesn't just parallelize: it keeps statistics about the 
running time of tests and reorders them on the next run, so slow tests don't 
leave the last JVM running longer than the rest. Another important thing is 
that the runner randomizes the test execution order, which is important to 
prevent bugs that are caused by tests

In any case, Lucene's build uses lots of other Ant tasks and Groovy scripts 
while building, e.g. forbidden-apis. Those are not available as native Gradle 
tasks (and likely never will be), but that's not a problem: you can invoke them 
as plain Ant tasks. You just have to declare a dependency in Gradle and then 
invoke them.

We also have special tasks to build the documentation, including XSL 
transformations and Markdown processing. This is all implemented in Groovy 
(the "ant pegdown task"), so you can just copy-paste the code into Gradle 
(which is Groovy). This is one reason why I prefer Gradle over Buildr.

So my wish:
We should use Gradle for dependency management, but all tasks/targets and 
functionality available in Lucene/Solr's Ant build should be preserved in an 
identical way, using the same external build tools (mainly Ant tasks) we 
currently use with Ant. Very important is also to *not* use transitive 
dependencies (Lucene prefers to declare every source code dependency 
explicitly; the only exceptions are build tools like Ant tasks loaded from 
Maven Central).

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/Solr. 
> It's not even the tool's fault; it seems Lucene builds have just hit the limits 
> of what it can do, especially in terms of submodule dependencies.
> I don't think Maven will help much either, given certain things I'd like to have 
> in the build (for example, collecting all tests globally for a single execution 
> phase at the end of the build, to support better load balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) Gradle build is JavaFX:
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1159: POMs out of sync

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1159/

No tests ran.

Build Log:
[...truncated 30823 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:164: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:582:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:440:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1470:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:537:
 Unable to initialize POM pom.xml: Failed to validate POM for project 
org.apache.solr:solr-dataimporthandler-extras at 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build/poms/solr/contrib/dataimporthandler-extras/pom.xml

Total time: 23 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6216:


Attachment: SOLR-6216.patch

Added a distributed test case. I don't know if it makes sense to have a 
completely different test class for this, but I need to suppress some older 
codecs, and I don't know if there is a way, from inside the test (in the 
distributed test cases), to know which codecs are being used. 

> Better faceting for multiple intervals on DV fields
> ---
>
> Key: SOLR-6216
> URL: https://issues.apache.org/jira/browse/SOLR-6216
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch
>
>
> There are two ways to have faceting on values ranges in Solr right now: 
> “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
> doing something similar:
> {code:java}
> searcher.numDocs(rangeQ , docs)
> {code}
> The good thing about this implementation is that it can benefit from caching. 
> The bad thing is that it may be slow with cold caches, and that there will be 
> a query for each of the ranges.
> A different implementation would be one that works similarly to regular field 
> faceting, using doc values and validating ranges for each value of the 
> matching documents. This implementation would sometimes be faster than Range 
> Faceting / Query Faceting, especially in cases where caches are not very 
> effective, such as under a high update rate, or where ranges change frequently.
> Functionally, the result should be exactly the same as the one obtained by 
> doing a facet query for every interval
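
For contrast, the status-quo approach the description compares against is a 
separate facet query per range; an illustrative request (field name and bounds 
are hypothetical, not from the patch) might look like:

```text
q=*:*&facet=true
  &facet.query=price:[0 TO 10}
  &facet.query=price:[10 TO 100}
  &facet.query=price:[100 TO *]
```

Each facet.query above is executed as its own range query, which is exactly the 
per-range cost the doc-values-based implementation aims to avoid.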



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5793) Add equals/hashCode to FieldType

2014-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5793:


Attachment: LUCENE-5793.patch

simple patch.

> Add equals/hashCode to FieldType
> 
>
> Key: LUCENE-5793
> URL: https://issues.apache.org/jira/browse/LUCENE-5793
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shay Banon
> Attachments: LUCENE-5793.patch
>
>
> It would be nice to have equals and hashCode on FieldType, so one can easily 
> check whether two instances are the same and, for example, reuse existing 
> default implementations of it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049245#comment-14049245
 ] 

Matt Davis commented on LUCENE-5755:


Understood.  I was not aware of JUnit4; I thought you meant JUnit version 4.  
Gradle does support a few things along the lines of parallelism:

http://www.gradle.org/docs/current/dsl/org.gradle.api.tasks.testing.Test.html

maxParallelForks
  The maximum number of forked test processes to execute in parallel. The 
  default value is 1 (no parallel test execution).
forkEvery
  The maximum number of test classes to execute in a forked test process. The 
  forked test process will be restarted when this limit is reached. The default 
  value is 0 (no maximum).
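
As a sketch, these two settings go directly on Gradle's Test task in the build 
script (the values below are illustrative, not recommendations):

```groovy
// Illustrative configuration of Gradle's built-in test forking options.
test {
    // Run up to 4 forked test JVMs in parallel (default is 1).
    maxParallelForks = 4
    // Restart each forked JVM after 100 test classes (default 0 = never).
    forkEvery = 100
}
```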

It doesn't, however, isolate working directories, and as far as I know it has 
no global seeding or load balancing.

It seems to me it would be good to push these features upstream into Gradle, 
since they sound useful to other projects, but that is outside my ability.

Filtering inner classes worked for me.  Thanks.
https://github.com/mdavis95/lucene-solr/commit/53217bccff2a14efa58951d8b3c0d20635f6



> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/Solr. 
> It's not even the tool's fault; it seems Lucene builds have just hit the limits 
> of what it can do, especially in terms of submodule dependencies.
> I don't think Maven will help much either, given certain things I'd like to have 
> in the build (for example, collecting all tests globally for a single execution 
> phase at the end of the build, to support better load balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) Gradle build is JavaFX:
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5768:


Attachment: SOLR-5768.diff

Here's an updated patch which applies to trunk.

The DistributedQueryComponentOptimizationTest doesn't pass with the patch. This 
is the error:
{code}
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.lang.NullPointerException
at 
org.apache.solr.handler.component.QueryComponent.regularFinishStage(QueryComponent.java:779)
at 
org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:733)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:333)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1980)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:744)

at 
__randomizedtesting.SeedInfo.seed([5353CAD02801E6C4:D2B544C85F5E86F8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:508)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:556)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:538)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:517)
at 
org.apache.solr.handler.component.DistributedQueryComponentOptimizationTest.doTest(DistributedQueryComponentOptimizationTest.java:83)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAc

[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049180#comment-14049180
 ] 

Dawid Weiss commented on LUCENE-5755:
-

RandomizedRunner and Ant's launcher for randomized testing are two different 
things. The launcher for Ant does more than just isolate/fork the subprocess 
JVM. See here for an overview:

http://labs.carrotsearch.com/download/randomizedtesting/2.1.2/docs/junit4-ant/Tasks/junit4.html

{code}
java.lang.RuntimeException: Suite class 
org.apache.lucene.util.TestVirtualMethod$TestClass1 should be public
{code}
You need to filter out nested classes from the pattern; these are not tests.
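
A minimal sketch of such a filter in a Gradle build (the exact pattern is an 
assumption, not taken from Matt's branch):

```groovy
// Exclude nested classes such as TestVirtualMethod$TestClass1 from the test
// pattern; the '$' in a compiled class name marks an inner class, and these
// are helper fixtures rather than top-level test suites.
test {
    exclude '**/*$*'
}
```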

I'm not saying gradle's test runner cannot be used -- it probably can be -- but 
you'll lose some of the functionality specifically written for Lucene (such as 
load balancing, cwd isolation or master seed propagation).

> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/Solr. 
> It's not even the tool's fault; it seems Lucene builds have just hit the limits 
> of what it can do, especially in terms of submodule dependencies.
> I don't think Maven will help much either, given certain things I'd like to have 
> in the build (for example, collecting all tests globally for a single execution 
> phase at the end of the build, to support better load balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) Gradle build is JavaFX:
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4757 - Still Failing

2014-07-01 Thread Dawid Weiss
I think this may be related to the fact that Uwe upgraded the openjdk port
(to fix the socket interrupt issue).

I honestly think it's a lot of work to maintain those tests on FreeBSD
-- I'm progressively less confident about what causes a failure: the
JRE/JDK or our code.

Dawid

On Tue, Jul 1, 2014 at 7:51 PM, Chris Hostetter
 wrote:
>
> This "posix_spawn is not a supported process launch mechanism on this
> platform." gem has hit us before, see my previous comment...
>
> https://mail-archives.apache.org/mod_mbox/lucene-dev/201406.mbox/%3Calpine.DEB.2.02.1406171421280.10600@frisbee%3E
>
> ...but as far as i know, this is the only time it's ever happened on a
> non-MacOS setup.
>
> anybody have any clue what's going on here?
>
>
>
>
> : Date: Tue, 1 Jul 2014 13:10:09 + (UTC)
> : From: Apache Jenkins Server 
> : Reply-To: dev@lucene.apache.org
> : To: dev@lucene.apache.org
> : Subject: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4757 - Still
> : Failing
> :
> : Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4757/
> :
> : 1 tests failed.
> : FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers
> :
> : Error Message:
> : posix_spawn is not a supported process launch mechanism on this platform.
> :
> : Stack Trace:
> : java.lang.Error: posix_spawn is not a supported process launch mechanism on 
> this platform.
> :   at 
> __randomizedtesting.SeedInfo.seed([AC10FFD0123831FF:410E44EA9935307B]:0)
> :   at java.lang.UNIXProcess$1.run(UNIXProcess.java:111)
> :   at java.lang.UNIXProcess$1.run(UNIXProcess.java:93)
> :   at java.security.AccessController.doPrivileged(Native Method)
> :   at java.lang.UNIXProcess.(UNIXProcess.java:91)
> :   at java.lang.ProcessImpl.start(ProcessImpl.java:130)
> :   at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
> :   at java.lang.Runtime.exec(Runtime.java:617)
> :   at java.lang.Runtime.exec(Runtime.java:450)
> :   at java.lang.Runtime.exec(Runtime.java:347)
> :   at 
> org.apache.solr.handler.admin.SystemInfoHandler.execute(SystemInfoHandler.java:220)
> :   at 
> org.apache.solr.handler.admin.SystemInfoHandler.getSystemInfo(SystemInfoHandler.java:176)
> :   at 
> org.apache.solr.handler.admin.SystemInfoHandler.handleRequestBody(SystemInfoHandler.java:97)
> :   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> :   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1980)
> :   at org.apache.solr.util.TestHarness.query(TestHarness.java:295)
> :   at org.apache.solr.util.TestHarness.query(TestHarness.java:278)
> :   at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:693)
> :   at 
> org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:115)
> :   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> :   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> :   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> :   at java.lang.reflect.Method.invoke(Method.java:606)
> :   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> :   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> :   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> :   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> :   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> :   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> :   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> :   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> :   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> :   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> :   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> :   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> :   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> :   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> :   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> :   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> :   at 
> com.carrotsearch.

Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4757 - Still Failing

2014-07-01 Thread Chris Hostetter

This "posix_spawn is not a supported process launch mechanism on this 
platform." gem has hit us before, see my previous comment...

https://mail-archives.apache.org/mod_mbox/lucene-dev/201406.mbox/%3Calpine.DEB.2.02.1406171421280.10600@frisbee%3E

...but as far as i know, this is the only time it's ever happened on a 
non-MacOS setup.

anybody have any clue what's going on here?




: Date: Tue, 1 Jul 2014 13:10:09 + (UTC)
: From: Apache Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4757 - Still
: Failing
: 
: Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4757/
: 
: 1 tests failed.
: FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers
: 
: Error Message:
: posix_spawn is not a supported process launch mechanism on this platform.
: 
: Stack Trace:
: java.lang.Error: posix_spawn is not a supported process launch mechanism on 
this platform.
:   at 
__randomizedtesting.SeedInfo.seed([AC10FFD0123831FF:410E44EA9935307B]:0)
:   at java.lang.UNIXProcess$1.run(UNIXProcess.java:111)
:   at java.lang.UNIXProcess$1.run(UNIXProcess.java:93)
:   at java.security.AccessController.doPrivileged(Native Method)
:   at java.lang.UNIXProcess.(UNIXProcess.java:91)
:   at java.lang.ProcessImpl.start(ProcessImpl.java:130)
:   at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
:   at java.lang.Runtime.exec(Runtime.java:617)
:   at java.lang.Runtime.exec(Runtime.java:450)
:   at java.lang.Runtime.exec(Runtime.java:347)
:   at 
org.apache.solr.handler.admin.SystemInfoHandler.execute(SystemInfoHandler.java:220)
:   at 
org.apache.solr.handler.admin.SystemInfoHandler.getSystemInfo(SystemInfoHandler.java:176)
:   at 
org.apache.solr.handler.admin.SystemInfoHandler.handleRequestBody(SystemInfoHandler.java:97)
:   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
:   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1980)
:   at org.apache.solr.util.TestHarness.query(TestHarness.java:295)
:   at org.apache.solr.util.TestHarness.query(TestHarness.java:278)
:   at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:693)
:   at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:115)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:606)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
:   at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:   at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
: 

[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-01 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049094#comment-14049094
 ] 

Matt Davis commented on LUCENE-5755:


Gradle does use whichever JUnit version you specify. For example:

dependencies {
testCompile 'junit:junit:[4.10,)'
}

And because of the RunWith(RandomizedRunner.class) annotation and the 
configured dependency on 
'com.carrotsearch.randomizedtesting:randomizedtesting-runner:2.1.3', it uses the 
RandomizedRunner from Carrot Search.  As I said before, it also launches a 
separate JVM.  The actual issue is that it switches System.out before 
each test to capture the output.  See the following classes:
https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/groovy/org/gradle/api/internal/tasks/testing/processors/CaptureTestOutputTestResultProcessor.java
https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/groovy/org/gradle/api/internal/tasks/testing/junit/JUnitTestClassProcessor.java
https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/groovy/org/gradle/api/internal/tasks/testing/junit/JULRedirector.java
https://github.com/gradle/gradle/blob/master/subprojects/core/src/main/groovy/org/gradle/logging/internal/DefaultStandardOutputRedirector.java
https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/groovy/org/gradle/api/internal/tasks/testing/junit/JUnitTestFramework.java
https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/groovy/org/gradle/api/tasks/testing/Test.java
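The redirection described above can be sketched in plain Java. This is only an illustration of the technique (swap System.out for a buffering stream around each test, then restore it), not Gradle's actual implementation; the class and method names here are made up:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StdoutCaptureSketch {
    // Redirect System.out around a task and return whatever it printed.
    public static String capture(Runnable testBody) {
        PrintStream original = System.out;          // keep the real stream
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buf));        // redirect before the "test"
        try {
            testBody.run();                         // anything printed here is captured
        } finally {
            System.setOut(original);                // always restore the real stream
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        String out = capture(() -> System.out.println("hello from test"));
        System.out.println("captured: " + out.trim());
    }
}
```

Any test-framework code that holds a direct reference to the original System.out (as Lucene's "sysout check" effectively does) will notice this swap, which is why the check trips under Gradle.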

If I disable the System.out check 
(https://github.com/mdavis95/lucene-solr/commit/1b8798894e9f2c52b905eb5f259ef4108bb0b6d7) 
and allow the Lucene benchmark module to find alg files through a hack 
(https://github.com/mdavis95/lucene-solr/commit/6be7c4a3b68fdc9ee749674d101a1c75b2f725be), 
a whole lot of tests start to pass.

For Lucene core I have 783 tests successful, 518 skipped, and 5 failures.  
The only 5 that failed were in TestVirtualMethod.

java.lang.RuntimeException: Suite class 
org.apache.lucene.util.TestVirtualMethod$TestClass1 should be public.
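The failure above is the runner validating suite classes before running them. A tiny reflection sketch (class names are hypothetical, just mirroring the shape of TestVirtualMethod$TestClass1) of the visibility precondition involved:

```java
import java.lang.reflect.Modifier;

public class SuiteVisibilitySketch {
    static class HiddenSuite {}          // package-private nested class, like TestClass1
    public static class VisibleSuite {}  // what a runner typically expects

    // A runner-style precondition: a discovered suite class must be public.
    public static boolean isRunnable(Class<?> c) {
        return Modifier.isPublic(c.getModifiers());
    }

    public static void main(String[] args) {
        System.out.println("HiddenSuite runnable:  " + isRunnable(HiddenSuite.class));
        System.out.println("VisibleSuite runnable: " + isRunnable(VisibleSuite.class));
    }
}
```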




> Explore alternative build systems
> -
>
> Key: LUCENE-5755
> URL: https://issues.apache.org/jira/browse/LUCENE-5755
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> I am dissatisfied with how ANT and submodules currently work in Lucene/Solr. 
> It's not even the tool's fault; it seems Lucene builds just hit the borders 
> of what it can do, especially in terms of submodule dependencies etc.
> I don't think Maven will help much either, given certain things I'd like to have 
> in the build (for example, collecting all tests globally for a single execution 
> phase at the end of the build, to support better load-balancing).
> I'd like to explore Gradle as an alternative. This task is a notepad for 
> thoughts and experiments.
> An example of a complex (?) Gradle build is JavaFX:
> http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6179) ManagedResource repeatedly logs warnings when not used

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049093#comment-14049093
 ] 

ASF subversion and git services commented on SOLR-6179:
---

Commit 1607150 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607150 ]

SOLR-6179: Include the RestManager stored data file to prevent warning when 
starting the example (and to prevent dirty checkouts when running example from 
svn)

> ManagedResource repeatedly logs warnings when not used
> --
>
> Key: SOLR-6179
> URL: https://issues.apache.org/jira/browse/SOLR-6179
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8, 4.8.1
> Environment: 
>Reporter: Hoss Man
>Assignee: Timothy Potter
>
> These messages are currently logged as WARNings, and should either be 
> switched to INFO level (or made more sophisticated so that it can tell when 
> solr is setup for managed resources but the data isn't available)...
> {noformat}
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No stored data found for /rest/managed
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No registered observers for /rest/managed
> {noformat}






[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049082#comment-14049082
 ] 

ASF subversion and git services commented on SOLR-2245:
---

Commit 1607147 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607147 ]

SOLR-2245: Numerous improvements to the MailEntityProcessor

> MailEntityProcessor Update
> --
>
> Key: SOLR-2245
> URL: https://issues.apache.org/jira/browse/SOLR-2245
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4, 1.4.1
>Reporter: Peter Sturge
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
> SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip
>
>
> This patch addresses a number of issues in the MailEntityProcessor 
> contrib-extras module.
> The changes are outlined here:
> * Added an 'includeContent' entity attribute to allow specifying content to 
> be included independently of processing attachments
>  e.g.  
> would include message content, but not attachment content
> * Added a synonym called 'processAttachments', which is synonymous to the 
> mis-spelled (and singular) 'processAttachement' property. This property 
> functions the same as processAttachement. Default= 'true' - if either is 
> false, then attachments are not processed. Note that only one of these should 
> really be specified in a given  tag.
> * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
> unread, not deleted etc.), there is still a property value stored in the 
> 'flags' field (the value is the string "none")
> Note: there is a potential backward compat issue with FLAGS.NONE for clients 
> that expect the absence of the 'flags' field to mean 'Not read'. I'm 
> calculating this would be extremely rare, and is inadvisable in any case as 
> user flags can be arbitrarily set, so fixing it up now will ensure future 
> client access will be consistent.
> * The folder name of an email is now included as a field called 'folder' 
> (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
> processing
> * The addPartToDocument() method that processes attachments is significantly 
> re-written, as there looked to be no real way the existing code would ever 
> actually process attachment content and add it to the row data
> Tested on the 3.x trunk with a number of popular imap servers.






Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 88910 - Failure!

2014-07-01 Thread Robert Muir
I committed a fix.

On Tue, Jul 1, 2014 at 8:37 AM, Robert Muir  wrote:
> one of these guys (either that stringunion automaton, or the "naive"
> one) isn't really deterministic and has transitions to dead states.
>
> On Tue, Jul 1, 2014 at 8:20 AM,   wrote:
>> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/88910/
>>
>> 1 tests failed.
>> REGRESSION:  org.apache.lucene.util.automaton.TestOperations.testStringUnion
>>
>> Error Message:
>>
>>
>> Stack Trace:
>> java.lang.AssertionError
>> at 
>> __randomizedtesting.SeedInfo.seed([5B10EDB8D55580F8:57D561ED8F9CA461]:0)
>> at 
>> org.apache.lucene.util.automaton.Operations.subsetOf(Operations.java:414)
>> at 
>> org.apache.lucene.util.automaton.Operations.sameLanguage(Operations.java:367)
>> at 
>> org.apache.lucene.util.automaton.TestOperations.testStringUnion(TestOperations.java:37)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>> at 
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>> at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at 
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>> at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
>> at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at 
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>> at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>> at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>> at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>>
>> Bui

[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow specifying rules for how the replicas/nodes of a 
cluster are allocated. Solr should have a flexible mechanism through which we 
can control the allocation of replicas, and later change it to suit the 
needs of the system.

All configuration is per collection. The rules are applied whenever a 
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.snitch 
How to identify the tags of nodes. Snitches are configured through the collection 
create command with the snitch prefix. eg: snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by 
other snitches:
 * node : The solr nodename
 * host : The hostname
 * ip : The ip address of the host
 * cores : A dynamic variable which gives the core count at any given point
 * disk : A dynamic variable which gives the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags called dc, rack from the region and zone values in EC2

h3.IPSnitch 
Use the IP to infer the “dc” and “rack” values

h3.NodePropertySnitch 
This lets users provide system properties to each node, with tag name and value.

example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this 
particular node will have two tags, “tag-x” and “tag-y”.
 
h3.RestSnitch 
Lets the user configure a URL which the server can invoke to get all 
the tags for a given node. 

This takes extra parameters in the create command,
example:  
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}
The response of the REST call 
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format. 
eg: 
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
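Either way, a consumer of this snitch response could read the properties-format variant with stock java.util.Properties. A small sketch (the class and method names are made up; the snitch plumbing itself is hypothetical):

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class SnitchResponseSketch {
    // Parse a properties-format snitch response ("tag-x=x-val\ntag-y=y-val")
    // into a tag -> value map.
    public static Map<String, String> parseProperties(String body) throws Exception {
        Properties p = new Properties();
        p.load(new StringReader(body));
        Map<String, String> tags = new HashMap<>();
        for (String name : p.stringPropertyNames()) {
            tags.put(name, p.getProperty(name));
        }
        return tags;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseProperties("tag-x=x-val\ntag-y=y-val"));
    }
}
```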
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag value pairs in Zookeeper. The 
user should be able to manage the tags and values of each node through a 
collection API 


h2.Placement 

This specifies how many replicas of a given shard need to be assigned to nodes 
with the given key-value pairs. These parameters are passed to the 
collection CREATE API as a parameter "placement". The values will be saved in 
the state of the collection as follows:
{code:Javascript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of two parts:

 * LHS, or the qualifier : The format is \{shardname}.\{replicacount}. Use 
the wild card “*” to qualify all.
 * RHS, or the conditions : The format is \{tagname}\{operand}\{value}. The tag 
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than. Only applicable to numeric tags
 ** < : less than. Only applicable to numeric tags
 ** ! : NOT, or not-equals

Each collection can have any number of rules. As long as the rules do not 
conflict with each other, they are accepted; otherwise an error is thrown.


Example rules:
 * “shard1.1”:“dc->dc1,rack->168” : Assign exactly 1 replica of 
shard1 to nodes having tags “dc=dc1,rack=168”.
 * “shard1.1+”:“dc->dc1,rack->168” : Same as above, but assigns at least one 
replica to the tag/value combination
 * “*.1”:“dc->dc1” : For all shards, keep exactly one replica in dc:dc1
 * “*.1+”:”dc->dc2” : At least one replica needs to be in dc:dc2
 * “*.2-”:”dc->dc3” : Keep a maximum of 2 replicas in dc:dc3 for all shards
 * “shard1.*”:”rack->730” : All replicas of shard1 will go to rack 730
 * “shard1.1”:“node->192.167.1.2:8983_solr” : 1 replica of shard1 must go to 
the node 192.167.1.2:8983_solr
 * “shard1.*”:“rack!738” : No replica of shard1 should go to rack 738
 * “shard1.*”:“host!192.168.89.91” : No replica of shard1 should go to host 
192.168.89.91
 * “\*.*”:“cores<5” : All replicas should be created on nodes with less than 5 
cores
 * “\*.*”:”disk>20gb” : All replicas must be created on nodes with disk space 
greater than 20gb

In the collection create API, all the placement rules are provided in a 
parameter called placement; multiple rules are separated with "|". 
example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|shard1.*:rack!738 
{noformat}
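As a rough illustration of the rule syntax described above (this is not Solr code; the class and field names are made up), one rule string could be split into its qualifier and condition like so:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlacementRuleSketch {
    // One parsed rule: shard qualifier, replica-count spec, and the RHS condition.
    public final String shard, countSpec, tag, operand, value;

    PlacementRuleSketch(String shard, String countSpec,
                        String tag, String operand, String value) {
        this.shard = shard; this.countSpec = countSpec;
        this.tag = tag; this.operand = operand; this.value = value;
    }

    // Parse rules like "shard1.2-:dc->dc1" or "*.1+:dc->dc2".
    public static PlacementRuleSketch parse(String rule) {
        String[] sides = rule.split(":", 2);
        // LHS: {shardname}.{replicacount}, count is "*" or digits plus optional +/-
        Matcher lhs = Pattern.compile("(.+)\\.(\\*|\\d+[+-]?)").matcher(sides[0]);
        // RHS: {tagname}{operand}{value}; operands are ->, >, <, !
        Matcher rhs = Pattern.compile("(\\w+)(->|[><!])(.+)").matcher(sides[1]);
        if (!lhs.matches() || !rhs.matches()) {
            throw new IllegalArgumentException("bad rule: " + rule);
        }
        return new PlacementRuleSketch(lhs.group(1), lhs.group(2),
                                       rhs.group(1), rhs.group(2), rhs.group(3));
    }

    public static void main(String[] args) {
        PlacementRuleSketch r = PlacementRuleSketch.parse("shard1.2-:dc->dc1");
        System.out.println(r.shard + " / " + r.countSpec + " / "
                           + r.tag + " " + r.operand + " " + r.value);
        // → shard1 / 2- / dc -> dc1
    }
}
```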

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replicas or later c

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-01 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049038#comment-14049038
 ] 

Noble Paul commented on SOLR-5473:
--

I'm planning to commit this to trunk in a day

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0
>
> Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, 
> ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-01 Thread Nicola Buso (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049019#comment-14049019
 ] 

Nicola Buso commented on LUCENE-5801:
-

Shai,

if I accept an OrdinalsReader in ctor I would need later in:
OrdinalMappingAtomicReader.getBinaryDocValues(String field)

to modify it to respect the parameter 'field' and currently 
DocValuesOrdinalsReader.field is private final.
Let me know.


> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
> Attachments: LUCENE-5801.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (SOLR-6179) ManagedResource repeatedly logs warnings when not used

2014-07-01 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049014#comment-14049014
 ] 

Timothy Potter commented on SOLR-6179:
--

Going to keep an eye on this on Jenkins today and then backport (and update the 
CHANGES.txt as needed).

> ManagedResource repeatedly logs warnings when not used
> --
>
> Key: SOLR-6179
> URL: https://issues.apache.org/jira/browse/SOLR-6179
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8, 4.8.1
> Environment: 
>Reporter: Hoss Man
>Assignee: Timothy Potter
>
> These messages are currently logged as WARNings, and should either be 
> switched to INFO level (or made more sophisticated so that it can tell when 
> solr is setup for managed resources but the data isn't available)...
> {noformat}
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No stored data found for /rest/managed
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No registered observers for /rest/managed
> {noformat}






[jira] [Commented] (SOLR-6179) ManagedResource repeatedly logs warnings when not used

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048998#comment-14048998
 ] 

ASF subversion and git services commented on SOLR-6179:
---

Commit 1607128 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607128 ]

SOLR-6179: Fix unit test breakage by using InMemory storage if config dir is 
not writable.

> ManagedResource repeatedly logs warnings when not used
> --
>
> Key: SOLR-6179
> URL: https://issues.apache.org/jira/browse/SOLR-6179
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8, 4.8.1
> Environment: 
>Reporter: Hoss Man
>Assignee: Timothy Potter
>
> These messages are currently logged as WARNings, and should either be 
> switched to INFO level (or made more sophisticated so that it can tell when 
> solr is setup for managed resources but the data isn't available)...
> {noformat}
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No stored data found for /rest/managed
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No registered observers for /rest/managed
> {noformat}






[jira] [Updated] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-01 Thread Nicola Buso (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicola Buso updated LUCENE-5801:


Attachment: LUCENE-5801.patch

Simple patch to test it working

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
> Attachments: LUCENE-5801.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow specifying rules for how the replicas/nodes of a 
cluster are allocated. Solr should have a flexible mechanism through which we 
can control the allocation of replicas, and later change it to suit the 
needs of the system.

All configuration is per collection. The rules are applied whenever a 
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.snitch 
How to identify the tags of nodes. Snitches are configured through the collection 
create command with the snitch prefix. eg: snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by 
other snitches:
 * node : The solr nodename
 * host : The hostname
 * ip : The ip address of the host
 * cores : A dynamic variable which gives the core count at any given point
 * disk : A dynamic variable which gives the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags called dc, rack from the region and zone values in EC2

h3.IPSnitch 
Use the IP to infer the “dc” and “rack” values

h3.NodePropertySnitch 
This lets users provide system properties to each node, with tag name and value.

example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this 
particular node will have two tags, “tag-x” and “tag-y”.
 
h3.RestSnitch 
Lets the user configure a URL which the server can invoke to get all 
the tags for a given node. 

This takes extra parameters in the create command,
example:  
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}
The response of the REST call 
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format. 
eg: 
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag value pairs in Zookeeper. The 
user should be able to manage the tags and values of each node through a 
collection API 


h2.Placement 

This specifies how many replicas of a given shard need to be assigned to nodes 
with the given key-value pairs. These parameters are passed to the 
collection CREATE API as a parameter "placement". The values will be saved in 
the state of the collection as follows:
{code:Javascript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of two parts:

 * LHS, or the qualifier : The format is \{shardname}.\{replicacount}. Use 
the wild card “*” to qualify all.
 * RHS, or the conditions : The format is \{tagname}\{operand}\{value}. The tag 
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than. Only applicable to numeric tags
 ** < : less than. Only applicable to numeric tags
 ** ! : NOT, or not-equals

Each collection can have any number of rules. As long as the rules do not 
conflict with each other, they are accepted; otherwise an error is thrown.


Example rules:
 * “shard1.1”:“dc->dc1,rack->168” : Assign exactly 1 replica of 
shard1 to nodes having tags “dc=dc1,rack=168”.
 * “shard1.1+”:“dc->dc1,rack->168” : Same as above, but assigns at least one 
replica to the tag/value combination
 * “*.1”:“dc->dc1” : For all shards, keep exactly one replica in dc:dc1
 * “*.1+”:”dc->dc2” : At least one replica needs to be in dc:dc2
 * “*.2-”:”dc->dc3” : Keep a maximum of 2 replicas in dc:dc3 for all shards
 * “shard1.*”:”rack->730” : All replicas of shard1 will go to rack 730
 * “shard1.1”:“node->192.167.1.2:8983_solr” : 1 replica of shard1 must go to 
the node 192.167.1.2:8983_solr
 * “shard1.*”:“rack!738” : No replica of shard1 should go to rack 738
 * “shard1.*”:“host!192.168.89.91” : No replica of shard1 should go to host 
192.168.89.91
 * “\*.*”:“cores<5” : All replicas should be created on nodes with less than 5 
cores
 * “\*.*”:”disk>20gb” : All replicas must be created on nodes with disk space 
greater than 20gb

In the collection create API, all the placement rules are provided in a 
parameter called placement; multiple rules are separated with "|". 
example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738 
{noformat}

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replicas or later

[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048956#comment-14048956
 ] 

ASF subversion and git services commented on SOLR-6157:
---

Commit 1607110 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607110 ]

SOLR-6157: Refactor the ensureAllReplicasAreActive method into base class and 
ensure the ClusterState is updated to address intermittent test failures on 
Jenkins.

> ReplicationFactorTest hangs
> ---
>
> Key: SOLR-6157
> URL: https://issues.apache.org/jira/browse/SOLR-6157
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Uwe Schindler
>Assignee: Timothy Potter
> Fix For: 4.10
>
>
> See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
> You can download all logs from there.






[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow users to specify rules for how the replicas/nodes
of a cluster are allocated. Solr should have a flexible mechanism through which
we can control the allocation of replicas, and later change it to suit the needs
of the system.

All configuration is on a per-collection basis. The rules are applied whenever a
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.Snitch
A snitch identifies the tags of a node. Snitches are configured through the
collection create command with the "snitch." prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by
other snitches:
 * node : the Solr node name
 * host : the hostname
 * ip : the IP address of the host
 * cores : a dynamic variable giving the core count at any given point
 * disk : a dynamic variable giving the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags, dc and rack, derived from the region and zone values in EC2.

h3.IPSnitch
Uses the IP address to infer the "dc" and "rack" values.

h3.NodePropertySnitch
This lets users provide a tag name and value to each node through system
properties.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
particular node has two tags, "tag-x" and "tag-y".
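As an illustrative sketch (plain Python, not the actual snitch implementation; the function name is made up), the tag:value list carried by such a system property could be parsed like this:

```python
def parse_snitch_vals(prop):
    """Parse a 'tag:val,tag:val' string, as in
    -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b."""
    tags = {}
    for pair in prop.split(","):
        # split only on the first ':' so values may contain colons
        name, _, value = pair.partition(":")
        tags[name.strip()] = value.strip()
    return tags

print(parse_snitch_vals("tag-x:val-a,tag-y:val-b"))
# {'tag-x': 'val-a', 'tag-y': 'val-b'}
```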
 
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags
for a given node.

This takes extra parameters in the create command, for example:
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}

The response of the REST call
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format, e.g.:
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
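Since the server must accept either representation, a client-side sketch (hypothetical Python, not Solr's implementation) could try JSON first and fall back to properties format:

```python
import json

def parse_snitch_response(body):
    """Parse a RestSnitch response body that may be either JSON
    ({"tag-x": "x-val"}) or properties format (tag-x=x-val per line)."""
    try:
        tags = json.loads(body)
        if isinstance(tags, dict):
            return tags
    except ValueError:
        pass  # not JSON; fall through to properties parsing
    tags = {}
    for line in body.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            tags[key.strip()] = value.strip()
    return tags

print(parse_snitch_response('{"tag-x": "x-val", "tag-y": "y-val"}'))
print(parse_snitch_response("tag-x=x-val\ntag-y=y-val"))
```

Both calls yield the same tag map, so downstream placement code need not care which format the snitch endpoint returned.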
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
user should be able to manage the tags and values of each node through a
collection API.


h2.Placement

This specifies how many replicas of a given shard need to be assigned to nodes
with the given key-value pairs. These parameters are passed to the collection
CREATE API as the "placement" parameter. The values are saved in the
collection's state as follows:
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use the
wildcard "*" to qualify all shards or counts.
 * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The tag
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than (only applicable to numeric tags)
 ** < : less than (only applicable to numeric tags)
 ** ! : NOT, i.e. not equals

Each collection can have any number of rules. As long as the rules do not
conflict with each other, that is fine; otherwise an error is thrown.
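The qualifier/condition grammar above can be sketched with a small parser (an illustrative Python sketch of the proposed syntax; the regexes and names are this editor's assumptions, not Solr code):

```python
import re

# LHS: {shardname}.{replicacount}; the count may carry a '+' (at least)
# or '-' (at most) modifier, and '*' is a wildcard for shard or count.
QUALIFIER = re.compile(r"^(?P<shard>[^.]+)\.(?P<count>\*|\d+[+-]?)$")
# RHS: {tagname}{operand}{value}, with operands ->, >, <, !
CONDITION = re.compile(r"^(?P<tag>\w+)(?P<op>->|>|<|!)(?P<value>.+)$")

def parse_rule(lhs, rhs):
    q = QUALIFIER.match(lhs)
    if not q:
        raise ValueError("bad qualifier: " + lhs)
    conds = []
    for part in rhs.split("&"):  # multiple conditions are joined with '&'
        c = CONDITION.match(part)
        if not c:
            raise ValueError("bad condition: " + part)
        conds.append((c.group("tag"), c.group("op"), c.group("value")))
    return q.group("shard"), q.group("count"), conds

print(parse_rule("shard1.1+", "dc->dc1&rack->168"))
# ('shard1', '1+', [('dc', '->', 'dc1'), ('rack', '->', '168')])
```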


Example rules:
 * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to nodes
having the tags "dc=dc1,rack=168"
 * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
replica to the tag/value combination
 * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
 * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
 * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
 * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
 * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
the node 192.167.1.2:8983_solr
 * "shard1.*":"rack!738" : no replica of shard1 should go to rack 738
 * "shard1.*":"host!192.168.89.91" : no replica of shard1 should go to host
192.168.89.91
 * "\*.*":"cores<5" : all replicas should be created on nodes with fewer than 5
cores
 * "\*.*":"disk>20gb" : all replicas must be created on nodes with more than
20gb of disk space

In the collection create API, all the placement rules are provided in a
parameter called "placement"; multiple rules are separated with "|".
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738 
{noformat}
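A minimal sketch of splitting such a placement parameter into its individual rules (hypothetical Python, not part of Solr; the function name is made up):

```python
def split_placement(placement):
    """Split the proposed 'placement' parameter into (qualifier, condition)
    pairs; rules are separated by '|' and each rule is 'lhs:rhs'."""
    rules = []
    for rule in placement.split("|"):
        # split only on the first ':' -- condition values may contain colons
        lhs, _, rhs = rule.partition(":")
        rules.append((lhs, rhs))
    return rules

param = "*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738"
print(split_placement(param))
# [('*.1', 'dc->dc1'), ('*.2-', 'dc->dc3'), ('!shard1.*', 'rack->738')]
```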

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replicas or later

[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow users to specify rules for how the replicas/nodes
of a cluster are allocated. Solr should have a flexible mechanism through which
we can control the allocation of replicas, and later change it to suit the needs
of the system.

All configuration is on a per-collection basis. The rules are applied whenever a
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.Snitch
A snitch identifies the tags of a node. Snitches are configured through the
collection create command with the "snitch." prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by
other snitches:
 * node : the Solr node name
 * host : the hostname
 * ip : the IP address of the host
 * cores : a dynamic variable giving the core count at any given point
 * disk : a dynamic variable giving the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags, dc and rack, derived from the region and zone values in EC2.

h3.IPSnitch
Uses the IP address to infer the "dc" and "rack" values.

h3.NodePropertySnitch
This lets users provide a tag name and value to each node through system
properties.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
particular node has two tags, "tag-x" and "tag-y".
 
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags
for a given node.

This takes extra parameters in the create command, for example:
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}

The response of the REST call
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format, e.g.:
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
user should be able to manage the tags and values of each node through a
collection API.


h2.Placement

This specifies how many replicas of a given shard need to be assigned to nodes
with the given key-value pairs. These parameters are passed to the collection
CREATE API as the "placement" parameter. The values are saved in the
collection's state as follows:
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use the
wildcard "*" to qualify all shards or counts.
 * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The tag
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than (only applicable to numeric tags)
 ** < : less than (only applicable to numeric tags)
 ** ! : NOT, i.e. not equals

Each collection can have any number of rules. As long as the rules do not
conflict with each other, that is fine; otherwise an error is thrown.


Example rules:
 * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to nodes
having the tags "dc=dc1,rack=168"
 * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
replica to the tag/value combination
 * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
 * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
 * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
 * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
 * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
the node 192.167.1.2:8983_solr
 * "shard1.*":"rack!738" : no replica of shard1 should go to rack 738
 * "shard1.*":"host!192.168.89.91" : no replica of shard1 should go to host
192.168.89.91
 * "*.*":"cores<5" : all replicas should be created on nodes with fewer than 5
cores
 * "*.*":"disk>20gb" : all replicas must be created on nodes with more than
20gb of disk space

In the collection create API, all the placement rules are provided in a
parameter called "placement"; multiple rules are separated with "|".
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738 
{noformat}

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replicas or later c

[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow users to specify rules for how the replicas/nodes
of a cluster are allocated. Solr should have a flexible mechanism through which
we can control the allocation of replicas, and later change it to suit the needs
of the system.

All configuration is on a per-collection basis. The rules are applied whenever a
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.Snitch
A snitch identifies the tags of a node. Snitches are configured through the
collection create command with the "snitch." prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by
other snitches:
 * node : the Solr node name
 * host : the hostname
 * ip : the IP address of the host
 * cores : a dynamic variable giving the core count at any given point
 * disk : a dynamic variable giving the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags, dc and rack, derived from the region and zone values in EC2.

h3.IPSnitch
Uses the IP address to infer the "dc" and "rack" values.

h3.NodePropertySnitch
This lets users provide a tag name and value to each node through system
properties.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
particular node has two tags, "tag-x" and "tag-y".
 
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags
for a given node.

This takes extra parameters in the create command, for example:
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}

The response of the REST call
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format, e.g.:
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
user should be able to manage the tags and values of each node through a
collection API.


h2.Placement

This specifies how many replicas of a given shard need to be assigned to nodes
with the given key-value pairs. These parameters are passed to the collection
CREATE API as the "placement" parameter. The values are saved in the
collection's state as follows:
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use the
wildcard "*" to qualify all. Use the \(!) operand for exclusion.
 * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The tag
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than (only applicable to numeric tags)
 ** < : less than (only applicable to numeric tags)

Each collection can have any number of rules. As long as the rules do not
conflict with each other, that is fine; otherwise an error is thrown.


Example rules:
 * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to nodes
having the tags "dc=dc1,rack=168"
 * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
replica to the tag/value combination
 * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
 * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
 * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
 * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
 * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
the node 192.167.1.2:8983_solr
 * "!shard1.*":"rack->738" : no replica of shard1 should go to rack 738
 * "!shard1.*":"host->192.168.89.91" : no replica of shard1 should go to host
192.168.89.91
 * "*.*":"cores<5" : all replicas should be created on nodes with fewer than 5
cores
 * "*.*":"disk>20gb" : all replicas must be created on nodes with more than
20gb of disk space

In the collection create API, all the placement rules are provided in a
parameter called "placement"; multiple rules are separated with "|".
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738 
{noformat}

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replic

[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Summary: Replica placement startegy for solrcloud  (was: Replica placement 
startegy dor solrcloud)

> Replica placement startegy for solrcloud
> 
>
> Key: SOLR-6220
> URL: https://issues.apache.org/jira/browse/SOLR-6220
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> h1.Objective
> Most cloud-based systems allow users to specify rules for how the
> replicas/nodes of a cluster are allocated. Solr should have a flexible
> mechanism through which we can control the allocation of replicas, and later
> change it to suit the needs of the system.
> All configuration is on a per-collection basis. The rules are applied whenever
> a replica is created in any of the shards of a given collection during:
>  * collection creation
>  * shard splitting
>  * add replica
>  * createshard
> There are two aspects to how replicas are placed: snitch and placement.
> h2.Snitch
> A snitch identifies the tags of a node. Snitches are configured through the
> collection create command with the "snitch." prefix, e.g.
> snitch.type=EC2Snitch.
> The system provides the following implicit tag names, which cannot be used by
> other snitches:
>  * node : the Solr node name
>  * host : the hostname
>  * ip : the IP address of the host
>  * cores : a dynamic variable giving the core count at any given point
>  * disk : a dynamic variable giving the available disk space at any given
> point
> There will be a few snitches provided by the system, such as:
> h3.EC2Snitch
> Provides two tags, dc and rack, derived from the region and zone values in
> EC2.
> h3.IPSnitch
> Uses the IP address to infer the "dc" and "rack" values.
> h3.NodePropertySnitch
> This lets users provide a tag name and value to each node through system
> properties.
> Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
> particular node has two tags, "tag-x" and "tag-y".
> h3.RestSnitch
> Lets the user configure a URL which the server can invoke to get all the tags
> for a given node.
> This takes extra parameters in the create command, for example:
> {{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}
> The response of the REST call
> {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
> must be in either JSON or properties format, e.g.:
> {code:JavaScript}
> {
>   "tag-x": "x-val",
>   "tag-y": "y-val"
> }
> {code}
> or
> {noformat}
> tag-x=x-val
> tag-y=y-val
> {noformat}
> h3.ManagedSnitch
> This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
> user should be able to manage the tags and values of each node through a
> collection API.
> h2.Placement
> This specifies how many replicas of a given shard need to be assigned to nodes
> with the given key-value pairs. These parameters are passed to the collection
> CREATE API as the "placement" parameter. The values are saved in the
> collection's state as follows:
> {code:JavaScript}
> {
>   "mycollection": {
>     "snitch": {
>       "type": "EC2Snitch"
>     },
>     "placement": {
>       "key1": "value1",
>       "key2": "value2"
>     }
>   }
> }
> {code}
> A rule consists of 2 parts:
>  * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use
> the wildcard "*" to qualify all. Use the \(!) operand for exclusion.
>  * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The
> tag names and values are provided by the snitch. The supported operands are:
>  ** -> : equals
>  ** > : greater than (only applicable to numeric tags)
>  ** < : less than (only applicable to numeric tags)
> Each collection can have any number of rules. As long as the rules do not
> conflict with each other, that is fine; otherwise an error is thrown.
> Example rules:
>  * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to
> nodes having the tags "dc=dc1,rack=168"
>  * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
> replica to the tag/value combination
>  * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
>  * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
>  * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
>  * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
>  * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
> the node 192.167.1.2:8983_solr
>  * "!shard1.*":"rack->738" : no replica of shard1 should go to rack 738
>  * "!shard1.*":"host->192.168.89.91" : no replica of shard1 should go to
> host 192.168.89.91
> * "*.*":"cores<5" : all replicas should be 

[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow users to specify rules for how the replicas/nodes
of a cluster are allocated. Solr should have a flexible mechanism through which
we can control the allocation of replicas, and later change it to suit the needs
of the system.

All configuration is on a per-collection basis. The rules are applied whenever a
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.Snitch
A snitch identifies the tags of a node. Snitches are configured through the
collection create command with the "snitch." prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by
other snitches:
 * node : the Solr node name
 * host : the hostname
 * ip : the IP address of the host
 * cores : a dynamic variable giving the core count at any given point
 * disk : a dynamic variable giving the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags, dc and rack, derived from the region and zone values in EC2.

h3.IPSnitch
Uses the IP address to infer the "dc" and "rack" values.

h3.NodePropertySnitch
This lets users provide a tag name and value to each node through system
properties.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
particular node has two tags, "tag-x" and "tag-y".
 
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags
for a given node.

This takes extra parameters in the create command, for example:
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}

The response of the REST call
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format, e.g.:
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
user should be able to manage the tags and values of each node through a
collection API.


h2.Placement

This specifies how many replicas of a given shard need to be assigned to nodes
with the given key-value pairs. These parameters are passed to the collection
CREATE API as the "placement" parameter. The values are saved in the
collection's state as follows:
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use the
wildcard "*" to qualify all. Use the \(!) operand for exclusion.
 * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The tag
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than (only applicable to numeric tags)
 ** < : less than (only applicable to numeric tags)

Each collection can have any number of rules. As long as the rules do not
conflict with each other, that is fine; otherwise an error is thrown.


Example rules:
 * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to nodes
having the tags "dc=dc1,rack=168"
 * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
replica to the tag/value combination
 * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
 * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
 * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
 * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
 * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
the node 192.167.1.2:8983_solr
 * "!shard1.*":"rack->738" : no replica of shard1 should go to rack 738
 * "!shard1.*":"host->192.168.89.91" : no replica of shard1 should go to host
192.168.89.91
 * "*.*":"cores<5" : all replicas should be created on nodes with fewer than 5
cores
 * "*.*":"disk>20gb" : all replicas must be created on nodes with more than
20gb of disk space

In the collection create API, all the placement rules are provided in a
parameter called "placement"; multiple rules are separated with "|".
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738 
{noformat}

  was:
h1.Objective
Most cloud based systems allow to specify rules on how the replicas/nodes of a 
cluster are allocated . Solr should have a flexible mechanism through which we 
should be able to control allocation of replic

[jira] [Created] (SOLR-6220) Replica placement startegy dor solrcloud

2014-07-01 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6220:


 Summary: Replica placement startegy dor solrcloud
 Key: SOLR-6220
 URL: https://issues.apache.org/jira/browse/SOLR-6220
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul


h1.Objective
Most cloud-based systems allow users to specify rules for how the replicas/nodes
of a cluster are allocated. Solr should have a flexible mechanism through which
we can control the allocation of replicas, and later change it to suit the needs
of the system.

All configuration is on a per-collection basis. The rules are applied whenever a
replica is created in any of the shards of a given collection during:

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.Snitch
A snitch identifies the tags of a node. Snitches are configured through the
collection create command with the "snitch." prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by
other snitches:
 * node : the Solr node name
 * host : the hostname
 * ip : the IP address of the host
 * cores : a dynamic variable giving the core count at any given point
 * disk : a dynamic variable giving the available disk space at any given point


There will be a few snitches provided by the system, such as:

h3.EC2Snitch
Provides two tags, dc and rack, derived from the region and zone values in EC2.

h3.IPSnitch
Uses the IP address to infer the "dc" and "rack" values.

h3.NodePropertySnitch
This lets users provide a tag name and value to each node through system
properties.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this
particular node has two tags, "tag-x" and "tag-y".
 
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags
for a given node.

This takes extra parameters in the create command, for example:
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}

The response of the REST call
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
must be in either JSON or properties format, e.g.:
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The
user should be able to manage the tags and values of each node through a
collection API.


h2.Placement

This specifies how many replicas of a given shard need to be assigned to nodes
with the given key-value pairs. These parameters are passed to the collection
CREATE API as the "placement" parameter. The values are saved in the
collection's state as follows:
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier : the format is \{shardname}.\{replicacount}. Use the
wildcard "*" to qualify all. Use the \(!) operand for exclusion.
 * RHS, or the condition : the format is \{tagname}\{operand}\{value}. The tag
names and values are provided by the snitch. The supported operands are:
 ** -> : equals
 ** > : greater than (only applicable to numeric tags)
 ** < : less than (only applicable to numeric tags)

Each collection can have any number of rules. As long as the rules do not
conflict with each other, that is fine; otherwise an error is thrown.


Example rules:
 * "shard1.1":"dc->dc1&rack->168" : assign exactly 1 replica of shard1 to nodes
having the tags "dc=dc1,rack=168"
 * "shard1.1+":"dc->dc1&rack->168" : same as above, but assign at least one
replica to the tag/value combination
 * "*.1":"dc->dc1" : for all shards, keep exactly one replica in dc:dc1
 * "*.1+":"dc->dc2" : at least one replica needs to be in dc:dc2
 * "*.2-":"dc->dc3" : keep a maximum of 2 replicas in dc:dc3 for all shards
 * "shard1.*":"rack->730" : all replicas of shard1 will go to rack 730
 * "shard1.1":"node->192.167.1.2:8983_solr" : 1 replica of shard1 must go to
the node 192.167.1.2:8983_solr
 * "!shard1.*":"rack->738" : no replica of shard1 should go to rack 738
 * "!shard1.*":"host->192.168.89.91" : no replica of shard1 should go to host
192.168.89.91
 * "*.*":"cores<5" : all replicas should be created on nodes with fewer than 5
cores
 * "*.*":"disk>20gb" : all replicas must be created on nodes with more than
20gb of disk space

In the collection create API, all the placement rules are provided in a
parameter called "placement"; multiple rules are separated with "|".
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc->dc1|*.2-:dc->dc3|!shard1.*:rack->738
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2

[jira] [Commented] (LUCENE-5792) Improve our packed *AppendingLongBuffer

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048934#comment-14048934
 ] 

ASF subversion and git services commented on LUCENE-5792:
-

Commit 1607105 from [~jpountz] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607105 ]

LUCENE-5792: Simplified *AppendingBuffer APIs.

> Improve our packed *AppendingLongBuffer
> ---
>
> Key: LUCENE-5792
> URL: https://issues.apache.org/jira/browse/LUCENE-5792
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5792.patch, LUCENE-5792.patch, LUCENE-5792.patch
>
>
> Since these classes are writeable, they need a buffer in order to stage 
> pending changes, for efficiency reasons. The issue is that at read time, for 
> every call to {{get}} the code needs to check whether the requested value is 
> in the buffer of pending values or has been packed into main storage, which 
> is inefficient.
> I would like to fix these APIs to separate the writer from the reader, the 
> latter being immutable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5792) Improve our packed *AppendingLongBuffer

2014-07-01 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5792.
--

Resolution: Fixed

> Improve our packed *AppendingLongBuffer
> ---
>
> Key: LUCENE-5792
> URL: https://issues.apache.org/jira/browse/LUCENE-5792
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5792.patch, LUCENE-5792.patch, LUCENE-5792.patch
>
>
> Since these classes are writeable, they need a buffer in order to stage 
> pending changes for efficiency reasons. The issue is that at read time, the 
> code then needs, for every call to {{get}}, to check whether the requested 
> value is in the buffer of pending values or has been packed into main 
> storage, which is inefficient.
> I would like to fix these APIs to separate the writer from the reader, the 
> latter being immutable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5792) Improve our packed *AppendingLongBuffer

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048931#comment-14048931
 ] 

ASF subversion and git services commented on LUCENE-5792:
-

Commit 1607103 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1607103 ]

LUCENE-5792: Simplified *AppendingBuffer APIs.

> Improve our packed *AppendingLongBuffer
> ---
>
> Key: LUCENE-5792
> URL: https://issues.apache.org/jira/browse/LUCENE-5792
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5792.patch, LUCENE-5792.patch, LUCENE-5792.patch
>
>
> Since these classes are writeable, they need a buffer in order to stage 
> pending changes for efficiency reasons. The issue is that at read time, the 
> code then needs, for every call to {{get}}, to check whether the requested 
> value is in the buffer of pending values or has been packed into main 
> storage, which is inefficient.
> I would like to fix these APIs to separate the writer from the reader, the 
> latter being immutable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6179) ManagedResource repeatedly logs warnings when not used

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048927#comment-14048927
 ] 

ASF subversion and git services commented on SOLR-6179:
---

Commit 1607102 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607102 ]

SOLR-6179: Better strategy for handling empty managed data to avoid spurious 
warning messages in the logs.

> ManagedResource repeatedly logs warnings when not used
> --
>
> Key: SOLR-6179
> URL: https://issues.apache.org/jira/browse/SOLR-6179
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8, 4.8.1
> Environment: 
>Reporter: Hoss Man
>Assignee: Timothy Potter
>
> These messages are currently logged as WARNings, and should either be 
> switched to INFO level (or made more sophisticated, so that it can tell when 
> Solr is set up for managed resources but the data isn't available)...
> {noformat}
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No stored data found for /rest/managed
> 2788 [coreLoadExecutor-5-thread-1] WARN  org.apache.solr.rest.ManagedResource 
>  – No registered observers for /rest/managed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5797.
-

   Resolution: Fixed
Fix Version/s: 4.10

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 4.10
>
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.
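The single-byte fast path suggested above can be sketched as follows. This is an illustrative sketch, not the actual patch: for byte-sized norm values a 256-slot direct-indexed table replaces the HashSet/HashMap pair, so each value is hashed zero times instead of twice.

```java
// Illustrative sketch (not the actual Lucene patch): for single-byte norm
// values, a 256-slot table replaces the HashSet/HashMap pair, so each value
// is "hashed" zero times instead of twice.
final class ByteNormOrdinals {
    private final boolean[] seen = new boolean[256];
    private final int[] ordinal = new int[256];
    private int uniqueCount = 0;
    private boolean frozen = false;

    void add(byte norm) {
        int slot = norm & 0xFF;      // direct index, no hashing
        if (!seen[slot]) {
            seen[slot] = true;
            uniqueCount++;
        }
    }

    // Assign ordinals in ascending slot order: the analogue of
    // "convert to array, sort and assign ordinals", but with a linear scan.
    void freeze() {
        int ord = 0;
        for (int slot = 0; slot < 256; slot++) {
            if (seen[slot]) {
                ordinal[slot] = ord++;
            }
        }
        frozen = true;
    }

    int encode(byte norm) {
        if (!frozen) throw new IllegalStateException("call freeze() first");
        return ordinal[norm & 0xFF]; // second direct index, no HashMap lookup
    }

    int uniqueCount() {
        return uniqueCount;
    }
}
```

A real implementation would still need a fallback path for norms wider than one byte; the table only covers the common 256-value range.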



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048891#comment-14048891
 ] 

ASF subversion and git services commented on LUCENE-5797:
-

Commit 1607080 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607080 ]

LUCENE-5797: Optimize norms merging

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 4.10
>
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6219) I have an XML document from which I want to get the Element values by using XPATH Predicates. For me it's not working. By the way I am testing it in Solr 4.7. Helps are highly appreciated. Thanks

2014-07-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6219.
-

Resolution: Invalid

Please ask usage questions on the solr-user mailing list.

> I have an XML document from which I want to get the Element values by using 
> XPATH Predicates. For me it's not working. By the way I am testing it in Solr 
> 4.7. Helps are highly appreciated. Thanks
> --
>
> Key: SOLR-6219
> URL: https://issues.apache.org/jira/browse/SOLR-6219
> Project: Solr
>  Issue Type: Bug
> Environment: I am using Fedora 14 with Java 6.
>Reporter: Balaji
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048878#comment-14048878
 ] 

ASF subversion and git services commented on LUCENE-5797:
-

Commit 1607074 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1607074 ]

LUCENE-5797: Optimize norms merging

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6154) SolrCloud: facet range option f..facet.mincount=1 omits buckets on response

2014-07-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048872#comment-14048872
 ] 

José Joaquín edited comment on SOLR-6154 at 7/1/14 1:32 PM:


I'm also experiencing the same effect on Solr 4.7.1. In my case, it comes up 
when including two collections in the faceting-by-range query.

When f..facet.mincount=0, all the buckets are correctly returned.
Otherwise only the entries from one of the collections are being returned.


was (Author: josejoaquín):
I'm also experiencing the same effect on Solr 4.7.1. In my case, it comes up 
when including two collections in the faceting-by-range query.

When f..facet.mincount= 0, all the buckets are correctly returned.
Otherwise only the entries from one of the collection are being returned.

> SolrCloud: facet range option f..facet.mincount=1 omits buckets on 
> response
> --
>
> Key: SOLR-6154
> URL: https://issues.apache.org/jira/browse/SOLR-6154
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.5.1, 4.8.1
> Environment: Solr 4.5.1 under Linux  - explicit id routing
>  Indexed 400,000+ Documents
>  explicit routing 
>  custom schema.xml
>  
> Solr 4.8.1 under Windows+Cygwin
>  Indexed 6 Documents
>  implicit id routing
>  out of the box schema
>Reporter: Ronald Matamoros
> Attachments: HowToReplicate.pdf, data.xml
>
>
> Attached
> - PDF with instructions on how to replicate.
> - data.xml to replicate index
> The f..facet.mincount option on a distributed search gives an 
> inconsistent list of buckets on a range facet.
>  
> Some buckets are ignored when using the option 
> "f..facet.mincount=1".
> The Solr logs do not indicate any error or warning during execution.
> The debug=true option and increasing the log levels for the FacetComponent do 
> not provide any hints about the behaviour.
> Replicated the issue on both Solr 4.5.1 & 4.8.1.
> For example, 
> removing the f..facet.mincount=1 option gives the expected list of 
> buckets for the 6 documents matched.
> 
>  
>
>  0
>  1
>  0
>  3
>  0
>  1
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  1
>  0
>  0
>  0
>  0
>
>50.0
>0.0
>1000.0
>0
>0
>2
>  
>
> Using the f..facet.mincount=1 option removes the 0 count buckets but 
> will also omit bucket 
>
>   
> 
> 1
> 3
> 1
>  
>  50.0
>  0.0
>  1000.0
>  0
>  0
>  4
>   
> 
> Resubmitting the query renders a different bucket list 
> (May need to resubmit a couple times)
>
>   
> 
> 3
> 1
>  
>  50.0
>  0.0
>  1000.0
>  0
>  0
>  2
>   
> 
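For reproduction, a request of the shape below exercises the option. The collection and field names are assumptions for illustration only (the field name elided as `f..facet.mincount` in the report is written here as `price`); the range parameters are taken from the gap/start/end values visible in the quoted responses:

```
http://localhost:8983/solr/collection1/select?q=*:*
    &facet=true
    &facet.range=price
    &f.price.facet.range.start=0.0
    &f.price.facet.range.end=1000.0
    &f.price.facet.range.gap=50.0
    &f.price.facet.mincount=1
```

On a multi-shard collection, repeating this request several times should surface the inconsistent bucket lists described above.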



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6154) SolrCloud: facet range option f..facet.mincount=1 omits buckets on response

2014-07-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048872#comment-14048872
 ] 

José Joaquín commented on SOLR-6154:


I'm also experiencing the same effect on Solr 4.7.1. In my case, it comes up 
when including two collections in the faceting-by-range query.

When f..facet.mincount=0, all the buckets are correctly returned.
Otherwise only the entries from one of the collections are being returned.

> SolrCloud: facet range option f..facet.mincount=1 omits buckets on 
> response
> --
>
> Key: SOLR-6154
> URL: https://issues.apache.org/jira/browse/SOLR-6154
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.5.1, 4.8.1
> Environment: Solr 4.5.1 under Linux  - explicit id routing
>  Indexed 400,000+ Documents
>  explicit routing 
>  custom schema.xml
>  
> Solr 4.8.1 under Windows+Cygwin
>  Indexed 6 Documents
>  implicit id routing
>  out of the box schema
>Reporter: Ronald Matamoros
> Attachments: HowToReplicate.pdf, data.xml
>
>
> Attached
> - PDF with instructions on how to replicate.
> - data.xml to replicate index
> The f..facet.mincount option on a distributed search gives an 
> inconsistent list of buckets on a range facet.
>  
> Some buckets are ignored when using the option 
> "f..facet.mincount=1".
> The Solr logs do not indicate any error or warning during execution.
> The debug=true option and increasing the log levels for the FacetComponent do 
> not provide any hints about the behaviour.
> Replicated the issue on both Solr 4.5.1 & 4.8.1.
> For example, 
> removing the f..facet.mincount=1 option gives the expected list of 
> buckets for the 6 documents matched.
> 
>  
>
>  0
>  1
>  0
>  3
>  0
>  1
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  1
>  0
>  0
>  0
>  0
>
>50.0
>0.0
>1000.0
>0
>0
>2
>  
>
> Using the f..facet.mincount=1 option removes the 0 count buckets but 
> will also omit bucket 
>
>   
> 
> 1
> 3
> 1
>  
>  50.0
>  0.0
>  1000.0
>  0
>  0
>  4
>   
> 
> Resubmitting the query renders a different bucket list 
> (May need to resubmit a couple times)
>
>   
> 
> 3
> 1
>  
>  50.0
>  0.0
>  1000.0
>  0
>  0
>  2
>   
> 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Language detection for solr 3.6.1

2014-07-01 Thread Poornima Jay
Hi,

Can anyone please let me know how to integrate 
http://code.google.com/p/language-detection/ into Solr 3.6.1. I want four 
languages (English, Chinese (Simplified and Traditional), Japanese, and 
Korean) to be added in one schema, i.e., multilingual search from a single 
schema file.

I tried adding solr-langdetect-3.5.0.jar in my /solr/contrib/langid/lib/ 
location and in /webapps/solr/WEB-INF/contrib/langid/lib/ and made changes in 
the solrconfig.xml as below



 
    
    
    content_eng    
    true
    content_eng,content_ja
    en,ja
    en:english ja:japanese
    en
    
    
  
  
  
    
    langid
    
  

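For comparison, below is a hedged sketch of what a LangDetect update-processor chain typically looks like in solrconfig.xml. The chain name, field names, and parameter values are assumptions reconstructed from the stripped snippet above, not a verified configuration:

```xml
<updateRequestProcessorChain name="langid">
  <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <!-- field(s) to run detection on (assumed from the snippet above) -->
    <str name="langid.fl">content_eng</str>
    <!-- map fields to language-specific variants, e.g. content_ja -->
    <bool name="langid.map">true</bool>
    <str name="langid.whitelist">en,ja</str>
    <str name="langid.map.lcmap">en:english ja:japanese</str>
    <str name="langid.fallback">en</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain then has to be referenced from the update request handler (e.g. via an `update.chain=langid` default) for detection to run at index time.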
Please suggest a solution.

Thanks,
Poornima

[jira] [Commented] (LUCENE-5794) Add a slow random-access ords wrapper

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048855#comment-14048855
 ] 

Robert Muir commented on LUCENE-5794:
-

Alternatively we could just fix Memory to support random access.

> Add a slow random-access ords wrapper
> -
>
> Key: LUCENE-5794
> URL: https://issues.apache.org/jira/browse/LUCENE-5794
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5794.patch
>
>
> Even if you are using an algorithm that requires random access (eg. sorting 
> based on the maximum value), it might still be ok to allow for it 
> occasionally on a codec that doesn't support random access, like 
> MemoryDocValuesFormat, by having a slow random-access wrapper. This slow 
> wrapper would need to be enabled explicitly. This would allow 
> algorithms that are optimized for random-access codecs to still work in the 
> general case.
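The wrapper idea above can be sketched as follows. This is an illustrative sketch, not the real Lucene API: the interface and class names are invented. The wrapper buffers each document's ordinals from a forward-only iterator into an array, so random access works (slowly) even on codecs that only support sequential reads.

```java
import java.util.Arrays;

// Illustrative sketch (not the real Lucene API): a forward-only ords source.
interface SequentialOrds {
    void setDocument(int docId);
    long nextOrd(); // returns -1 when the current document is exhausted
}

// Wraps a sequential source and materializes each document's ords into an
// array, so ordAt(i) works even when the codec only supports forward reads.
final class SlowRandomAccessOrds {
    private final SequentialOrds in;
    private long[] ords = new long[4];
    private int count;

    SlowRandomAccessOrds(SequentialOrds in) {
        this.in = in;
    }

    void setDocument(int docId) {
        in.setDocument(docId);
        count = 0;
        long ord;
        while ((ord = in.nextOrd()) != -1) { // buffer the whole document
            if (count == ords.length) {
                ords = Arrays.copyOf(ords, count * 2);
            }
            ords[count++] = ord;
        }
    }

    long ordAt(int index) {
        return ords[index]; // random access into the buffered ords
    }

    int cardinality() {
        return count;
    }
}
```

As the comments in the thread note, this is cheap when documents have few values, but the explicit opt-in matters because buffering every document's ords can degrade badly in the worst case.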



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5794) Add a slow random-access ords wrapper

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048851#comment-14048851
 ] 

Adrien Grand commented on LUCENE-5794:
--

Yes. If you have an algorithm that works better on random-access ords, I'd like 
to write it using the RandomAccessOrds class and then still make it usable on 
codecs that don't have random-access ords by using this slow wrapper. If your 
documents only have a couple of values, this is probably OK anyway. I think we 
just need to make sure this wrapping is explicit in order to avoid worst-cases?


> Add a slow random-access ords wrapper
> -
>
> Key: LUCENE-5794
> URL: https://issues.apache.org/jira/browse/LUCENE-5794
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5794.patch
>
>
> Even if you are using an algorithm that requires random access (eg. sorting 
> based on the maximum value), it might still be ok to allow for it 
> occasionally on a codec that doesn't support random access, like 
> MemoryDocValuesFormat, by having a slow random-access wrapper. This slow 
> wrapper would need to be enabled explicitly. This would allow 
> algorithms that are optimized for random-access codecs to still work in the 
> general case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5799) speed up DocValuesConsumer.mergeNumericField

2014-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5799.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

> speed up DocValuesConsumer.mergeNumericField
> 
>
> Key: LUCENE-5799
> URL: https://issues.apache.org/jira/browse/LUCENE-5799
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5799.patch
>
>
> This method (used for both numeric docvalues and norms) is a little slow:
> * does some boxing for no good reason (can just use a boolean instead)
> * checks docsWithField always, instead of only when value == 0. This can 
> cause unnecessary i/o.
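Both points above can be sketched in plain Java. This is an illustrative sketch, not the actual patch: the merge-loop state is a primitive long plus a boolean (no boxed Long), and the possibly I/O-backed docsWithField set is consulted only for the ambiguous value 0, since a nonzero value always implies the document has a value.

```java
import java.util.function.IntPredicate;

// Illustrative sketch (not the actual patch) of the two fixes: primitive
// state instead of a boxed Long, and docsWithField consulted only when the
// value is 0.
final class NumericMergeSketch {
    // Counts documents that have a value, given per-doc raw values and a
    // docsWithField predicate that may be expensive (I/O-backed) to consult.
    static int countDocsWithValue(long[] values, IntPredicate docsWithField) {
        int count = 0;
        for (int doc = 0; doc < values.length; doc++) {
            long v = values[doc];      // primitive long, no boxing
            boolean hasValue;          // primitive flag, no boxed Boolean
            if (v != 0) {
                hasValue = true;       // nonzero always means "has a value"
            } else {
                hasValue = docsWithField.test(doc); // only hit storage for 0
            }
            if (hasValue) {
                count++;
            }
        }
        return count;
    }
}
```

Skipping the docsWithField check for nonzero values avoids the unnecessary I/O the issue describes, since most documents in a typical segment have nonzero norms.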



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5799) speed up DocValuesConsumer.mergeNumericField

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048849#comment-14048849
 ] 

ASF subversion and git services commented on LUCENE-5799:
-

Commit 1607067 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607067 ]

LUCENE-5799: optimize numeric docvalues merging

> speed up DocValuesConsumer.mergeNumericField
> 
>
> Key: LUCENE-5799
> URL: https://issues.apache.org/jira/browse/LUCENE-5799
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5799.patch
>
>
> This method (used for both numeric docvalues and norms) is a little slow:
> * does some boxing for no good reason (can just use a boolean instead)
> * checks docsWithField always, instead of only when value == 0. This can 
> cause unnecessary i/o.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4757 - Still Failing

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4757/

1 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
posix_spawn is not a supported process launch mechanism on this platform.

Stack Trace:
java.lang.Error: posix_spawn is not a supported process launch mechanism on 
this platform.
at 
__randomizedtesting.SeedInfo.seed([AC10FFD0123831FF:410E44EA9935307B]:0)
at java.lang.UNIXProcess$1.run(UNIXProcess.java:111)
at java.lang.UNIXProcess$1.run(UNIXProcess.java:93)
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:91)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
at java.lang.Runtime.exec(Runtime.java:617)
at java.lang.Runtime.exec(Runtime.java:450)
at java.lang.Runtime.exec(Runtime.java:347)
at 
org.apache.solr.handler.admin.SystemInfoHandler.execute(SystemInfoHandler.java:220)
at 
org.apache.solr.handler.admin.SystemInfoHandler.getSystemInfo(SystemInfoHandler.java:176)
at 
org.apache.solr.handler.admin.SystemInfoHandler.handleRequestBody(SystemInfoHandler.java:97)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1980)
at org.apache.solr.util.TestHarness.query(TestHarness.java:295)
at org.apache.solr.util.TestHarness.query(TestHarness.java:278)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:693)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (LUCENE-5799) speed up DocValuesConsumer.mergeNumericField

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048843#comment-14048843
 ] 

ASF subversion and git services commented on LUCENE-5799:
-

Commit 1607065 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1607065 ]

LUCENE-5799: optimize numeric docvalues merging

> speed up DocValuesConsumer.mergeNumericField
> 
>
> Key: LUCENE-5799
> URL: https://issues.apache.org/jira/browse/LUCENE-5799
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5799.patch
>
>
> This method (used for both numeric docvalues and norms) is a little slow:
> * does some boxing for no good reason (can just use a boolean instead)
> * checks docsWithField always, instead of only when value == 0. This can 
> cause unnecessary i/o.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-01 Thread Nicola Buso (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048839#comment-14048839
 ] 

Nicola Buso commented on LUCENE-5801:
-

Thanks Shai, I will follow your indications, and later I will also reintroduce 
TaxonomyMergeUtils.

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>
> In Lucene versions after 4.6.1 the class 
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader 
> was removed; resurrect it, because it is used when merging indexes related to 
> merged taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-01 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048831#comment-14048831
 ] 

Shai Erera commented on LUCENE-5801:


Thanks for opening this, we definitely need to bring this class back. While 
you're at it, notice that there is a matching test as well as 
TaxonomyMergeUtils which did some work for the user ... we should consider 
returning them as well, but this time under o.a.l.facet.taxonomy.utils, as they 
are specific to the taxonomy index.

I think one change we should make is to somehow expose FacetsConfig.dedupAndEncode 
as a static method, so you can encode the new ordinals, and then:

* Take an OrdinalsReader in the ctor (optional, 2nd ctor) in case the app used 
custom encoding (default to DocValuesOrdinalsReader).
* Have a protected method dedupAndEncode, like FacetsConfig, default to 
FacetsConfig.dedupAndEncode and allow the app to override with its own custom 
encoding.

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>
> In Lucene versions after 4.6.1 the class 
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader 
> was removed; resurrect it, because it is used when merging indexes related to 
> merged taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048819#comment-14048819
 ] 

Adrien Grand edited comment on LUCENE-5797 at 7/1/14 12:54 PM:
---

The patch looks good to me. I think the complexity is ok; I was just a bit 
confused about why the size was stored as a short when looking at NormsMap out 
of context. Maybe we could just add a comment about this limitation?


was (Author: jpountz):
The patch looks good to me. I think the complexity is ok, I was just a bit 
confused why the size was stored as a long when looking at NormsMap out of 
context, maybe we could just have a comment about this limitation?

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048829#comment-14048829
 ] 

Robert Muir commented on LUCENE-5797:
-

I'll try to add an assert as well.

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5798) minor optimizations to MultiDocs(AndPositions)Enum.reset()

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048827#comment-14048827
 ] 

Adrien Grand commented on LUCENE-5798:
--

+1

> minor optimizations to MultiDocs(AndPositions)Enum.reset()
> --
>
> Key: LUCENE-5798
> URL: https://issues.apache.org/jira/browse/LUCENE-5798
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5798.patch
>
>
> This method is called by merging for each term, potentially many times, but 
> only returning a few docs for each invocation (e.g. imagine high cardinality 
> fields, unique id fields, normal zipf distribution on full text).
> Today we create a new EnumWithSlice[] array and new EnumWithSlice entry for 
> each term, but this creates a fair amount of unnecessary garbage: instead we 
> can just make this array up-front as size subReaderCount and reuse it.
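The reuse pattern described above can be sketched roughly like this. EnumWithSlice is reduced to a stub and the names are illustrative, not the actual Lucene code; the point is the allocation pattern, not the enum logic itself.

```java
// EnumWithSlice is a stub standing in for Lucene's inner class; the point
// is allocating the array once, up front, instead of per term.
public class MultiEnumSketch {
  static final class EnumWithSlice {
    int docBase; // reassigned on every reset() instead of reallocating
  }

  private final EnumWithSlice[] subs;
  private int numSubs;

  public MultiEnumSketch(int subReaderCount) {
    // allocate the array and its entries once, up front
    subs = new EnumWithSlice[subReaderCount];
    for (int i = 0; i < subReaderCount; i++) {
      subs[i] = new EnumWithSlice();
    }
  }

  /** Reuse the preallocated entries: no per-call garbage. */
  public void reset(int[] docBases) {
    numSubs = docBases.length; // only the first numSubs entries are live
    for (int i = 0; i < numSubs; i++) {
      subs[i].docBase = docBases[i];
    }
  }

  public int numSubs() { return numSubs; }
}
```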



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5799) speed up DocValuesConsumer.mergeNumericField

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048825#comment-14048825
 ] 

Adrien Grand commented on LUCENE-5799:
--

+1

> speed up DocValuesConsumer.mergeNumericField
> 
>
> Key: LUCENE-5799
> URL: https://issues.apache.org/jira/browse/LUCENE-5799
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5799.patch
>
>
> This method (used for both numeric docvalues and norms) is a little slow:
> * does some boxing for no good reason (can just use a boolean instead)
> * checks docsWithField always, instead of only when value == 0. This can 
> cause unnecessary i/o.
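The two fixes can be illustrated with a small sketch (Bits and hasValue are hypothetical stand-ins, not the actual Lucene signatures): a plain boolean avoids the boxing, and docsWithField, which may trigger i/o, is only consulted when the raw value is 0 and could therefore mean "missing".

```java
// Sketch of the two fixes (Bits and hasValue are illustrative stand-ins,
// not the actual Lucene signatures): a plain boolean avoids boxing, and
// docsWithField, which may trigger i/o, is only consulted when the raw
// value is 0 and could therefore mean "missing".
public class MergeSketch {
  public interface Bits { boolean get(int doc); }

  public static boolean hasValue(long raw, int doc, Bits docsWithField) {
    // short-circuit: any non-zero value must be a real value,
    // so the (potentially costly) docsWithField lookup is skipped
    return raw != 0 || docsWithField.get(doc);
  }
}
```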



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5797) improve speed of norms merging

2014-07-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048819#comment-14048819
 ] 

Adrien Grand commented on LUCENE-5797:
--

The patch looks good to me. I think the complexity is ok; I was just a bit 
confused about why the size was stored as a long when looking at NormsMap out 
of context. Maybe we could just add a comment about this limitation?

> improve speed of norms merging
> --
>
> Key: LUCENE-5797
> URL: https://issues.apache.org/jira/browse/LUCENE-5797
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5797.patch
>
>
> Today we use the following procedure:
> * track HashSet uniqueValues, until it exceeds 256 unique values.
> * convert to array, sort and assign ordinals to each one
> * create encoder map (HashMap) to encode each value.
> This results in each value being hashed twice... but the vast majority of the 
> time people will just be using single-byte norms and a simple array is enough 
> for that range.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2014-07-01 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-4258:
---

Fix Version/s: (was: 4.9)
   (was: 5.0)

> Incremental Field Updates through Stacked Segments
> --
>
> Key: LUCENE-4258
> URL: https://issues.apache.org/jira/browse/LUCENE-4258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Sivan Yogev
> Attachments: IncrementalFieldUpdates.odp, 
> LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
> LUCENE-4258.branch.2.patch, LUCENE-4258.branch.4.patch, 
> LUCENE-4258.branch.5.patch, LUCENE-4258.branch.6.patch, 
> LUCENE-4258.branch.6.patch, LUCENE-4258.branch3.patch, 
> LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
> LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
> LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch
>
>   Original Estimate: 2,520h
>  Remaining Estimate: 2,520h
>
> Shai and I would like to start working on the proposal to Incremental Field 
> Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 88910 - Failure!

2014-07-01 Thread Robert Muir
one of these guys (either that stringunion automaton, or the "naive"
one) isn't really deterministic and has transitions to dead states.

On Tue, Jul 1, 2014 at 8:20 AM,   wrote:
> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/88910/
>
> 1 tests failed.
> REGRESSION:  org.apache.lucene.util.automaton.TestOperations.testStringUnion
>
> Error Message:
>
>
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([5B10EDB8D55580F8:57D561ED8F9CA461]:0)
> at 
> org.apache.lucene.util.automaton.Operations.subsetOf(Operations.java:414)
> at 
> org.apache.lucene.util.automaton.Operations.sameLanguage(Operations.java:367)
> at 
> org.apache.lucene.util.automaton.TestOperations.testStringUnion(TestOperations.java:37)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> Build Log:
> [...truncated 263 lines...]
>[junit4] Suite: org.apache.lucene.util.automaton.TestOperations
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=Test

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 88910 - Failure!

2014-07-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/88910/

1 tests failed.
REGRESSION:  org.apache.lucene.util.automaton.TestOperations.testStringUnion

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5B10EDB8D55580F8:57D561ED8F9CA461]:0)
at 
org.apache.lucene.util.automaton.Operations.subsetOf(Operations.java:414)
at 
org.apache.lucene.util.automaton.Operations.sameLanguage(Operations.java:367)
at 
org.apache.lucene.util.automaton.TestOperations.testStringUnion(TestOperations.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 263 lines...]
   [junit4] Suite: org.apache.lucene.util.automaton.TestOperations
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestOperations 
-Dtests.method=testStringUnion -Dtests.seed=5B10EDB8D55580F8 -Dtests.slow=true 
-Dtests.locale=en_SG -Dtests.timezone=CTT -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.21s J4 | TestOperations.testStringUnion <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5B10EDB8D55580F8:5

[jira] [Resolved] (LUCENE-5786) Unflushed/ truncated events file (hung testing subprocess)

2014-07-01 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5786.
-

Resolution: Fixed

I upgraded to RR 2.1.6; it should halt the forked JVM on any communication with 
the master now so no more hangs, hopefully.

> Unflushed/ truncated events file (hung testing subprocess)
> --
>
> Key: LUCENE-5786
> URL: https://issues.apache.org/jira/browse/LUCENE-5786
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Fix For: 5.0, 4.10
>
>
> This has happened several times on Jenkins, typically on 
> SSLMigrationTest.testDistribSearch, but probably on other tests as well.
> The symptom is: the test framework never terminates, it also reports an 
> incorrect (?) hung test.
> The problem is that the actual forked JVM is hung on reading stdin, waiting 
> for the next test suite (no test thread is present); the master process is 
> hung on receiving data from the forked jvm (both the events file and stdout 
> spill is truncated in the middle of a test). The last output is:
> {code}
> [
>   "APPEND_STDERR",
>   {
> "chunk": "612639 T30203 oasu.DefaultSolrCoreState.doRecovery Running 
> recovery - first canceling any ongoing recovery%0A"
>   }
> ]
> [
>   "APPEND_STDERR"
> {code}
> Overall, it looks insane -- there are flushes after each test completes 
> (normally or not), there are tests *following* the one that last reported 
> output and before dynamic suites on stdin. 
> I have no idea. The best explanation is insane -- looks like the test thread 
> just died in the middle of executing Java code...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5786) Unflushed/ truncated events file (hung testing subprocess)

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048792#comment-14048792
 ] 

ASF subversion and git services commented on LUCENE-5786:
-

Commit 1607060 from [~dawidweiss] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607060 ]

LUCENE-5786: Unflushed/ truncated events file (hung testing subprocess). 
Updating RR to 2.1.6

> Unflushed/ truncated events file (hung testing subprocess)
> --
>
> Key: LUCENE-5786
> URL: https://issues.apache.org/jira/browse/LUCENE-5786
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Fix For: 5.0, 4.10
>
>
> This has happened several times on Jenkins, typically on 
> SSLMigrationTest.testDistribSearch, but probably on other tests as well.
> The symptom is: the test framework never terminates, it also reports an 
> incorrect (?) hung test.
> The problem is that the actual forked JVM is hung on reading stdin, waiting 
> for the next test suite (no test thread is present); the master process is 
> hung on receiving data from the forked jvm (both the events file and stdout 
> spill is truncated in the middle of a test). The last output is:
> {code}
> [
>   "APPEND_STDERR",
>   {
> "chunk": "612639 T30203 oasu.DefaultSolrCoreState.doRecovery Running 
> recovery - first canceling any ongoing recovery%0A"
>   }
> ]
> [
>   "APPEND_STDERR"
> {code}
> Overall, it looks insane -- there are flushes after each test completes 
> (normally or not), there are tests *following* the one that last reported 
> output and before dynamic suites on stdin. 
> I have no idea. The best explanation is insane -- looks like the test thread 
> just died in the middle of executing Java code...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5786) Unflushed/ truncated events file (hung testing subprocess)

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048790#comment-14048790
 ] 

ASF subversion and git services commented on LUCENE-5786:
-

Commit 1607058 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1607058 ]

LUCENE-5786: Unflushed/ truncated events file (hung testing subprocess). 
Updating RR to 2.1.6

> Unflushed/ truncated events file (hung testing subprocess)
> --
>
> Key: LUCENE-5786
> URL: https://issues.apache.org/jira/browse/LUCENE-5786
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Fix For: 5.0, 4.10
>
>
> This has happened several times on Jenkins, typically on 
> SSLMigrationTest.testDistribSearch, but probably on other tests as well.
> The symptom is: the test framework never terminates, it also reports an 
> incorrect (?) hung test.
> The problem is that the actual forked JVM is hung on reading stdin, waiting 
> for the next test suite (no test thread is present); the master process is 
> hung on receiving data from the forked jvm (both the events file and stdout 
> spill is truncated in the middle of a test). The last output is:
> {code}
> [
>   "APPEND_STDERR",
>   {
> "chunk": "612639 T30203 oasu.DefaultSolrCoreState.doRecovery Running 
> recovery - first canceling any ongoing recovery%0A"
>   }
> ]
> [
>   "APPEND_STDERR"
> {code}
> Overall, it looks insane -- there are flushes after each test completes 
> (normally or not), there are tests *following* the one that last reported 
> output and before dynamic suites on stdin. 
> I have no idea. The best explanation is insane -- looks like the test thread 
> just died in the middle of executing Java code...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5798) minor optimizations to MultiDocs(AndPositions)Enum.reset()

2014-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5798.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

> minor optimizations to MultiDocs(AndPositions)Enum.reset()
> --
>
> Key: LUCENE-5798
> URL: https://issues.apache.org/jira/browse/LUCENE-5798
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5798.patch
>
>
> This method is called by merging for each term, potentially many times, but 
> only returning a few docs for each invocation (e.g. imagine high cardinality 
> fields, unique id fields, normal zipf distribution on full text).
> Today we create a new EnumWithSlice[] array and new EnumWithSlice entry for 
> each term, but this creates a fair amount of unnecessary garbage: instead we 
> can just make this array up-front as size subReaderCount and reuse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5798) minor optimizations to MultiDocs(AndPositions)Enum.reset()

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048784#comment-14048784
 ] 

ASF subversion and git services commented on LUCENE-5798:
-

Commit 1607055 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607055 ]

LUCENE-5798: Optimize MultiDocsEnum reuse

> minor optimizations to MultiDocs(AndPositions)Enum.reset()
> --
>
> Key: LUCENE-5798
> URL: https://issues.apache.org/jira/browse/LUCENE-5798
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5798.patch
>
>
> This method is called by merging for each term, potentially many times, but 
> only returning a few docs for each invocation (e.g. imagine high cardinality 
> fields, unique id fields, normal zipf distribution on full text).
> Today we create a new EnumWithSlice[] array and new EnumWithSlice entry for 
> each term, but this creates a fair amount of unnecessary garbage: instead we 
> can just make this array up-front as size subReaderCount and reuse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5798) minor optimizations to MultiDocs(AndPositions)Enum.reset()

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048776#comment-14048776
 ] 

ASF subversion and git services commented on LUCENE-5798:
-

Commit 1607049 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1607049 ]

LUCENE-5798: Optimize MultiDocsEnum reuse

> minor optimizations to MultiDocs(AndPositions)Enum.reset()
> --
>
> Key: LUCENE-5798
> URL: https://issues.apache.org/jira/browse/LUCENE-5798
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-5798.patch
>
>
> This method is called by merging for each term, potentially many times, but 
> only returning a few docs for each invocation (e.g. imagine high cardinality 
> fields, unique id fields, normal zipf distribution on full text).
> Today we create a new EnumWithSlice[] array and new EnumWithSlice entry for 
> each term, but this creates a fair amount of unnecessary garbage: instead we 
> can just make this array up-front as size subReaderCount and reuse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5786) Unflushed/ truncated events file (hung testing subprocess)

2014-07-01 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5786:


Fix Version/s: 4.10
   5.0

> Unflushed/ truncated events file (hung testing subprocess)
> --
>
> Key: LUCENE-5786
> URL: https://issues.apache.org/jira/browse/LUCENE-5786
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Fix For: 5.0, 4.10
>
>
> This has happened several times on Jenkins, typically on 
> SSLMigrationTest.testDistribSearch, but probably on other tests as well.
> The symptom is: the test framework never terminates, it also reports an 
> incorrect (?) hung test.
> The problem is that the actual forked JVM is hung on reading stdin, waiting 
> for the next test suite (no test thread is present); the master process is 
> hung on receiving data from the forked jvm (both the events file and stdout 
> spill is truncated in the middle of a test). The last output is:
> {code}
> [
>   "APPEND_STDERR",
>   {
> "chunk": "612639 T30203 oasu.DefaultSolrCoreState.doRecovery Running 
> recovery - first canceling any ongoing recovery%0A"
>   }
> ]
> [
>   "APPEND_STDERR"
> {code}
> Overall, it looks insane -- there are flushes after each test completes 
> (normally or not), there are tests *following* the one that last reported 
> output and before dynamic suites on stdin. 
> I have no idea. The best explanation is insane -- looks like the test thread 
> just died in the middle of executing Java code...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6219) I have an XML document from which I want to get the Element values by using XPath predicates. For me it's not working. By the way, I am testing it in Solr 4.7. Help is highly appreciated. Thanks

2014-07-01 Thread Balaji (JIRA)
Balaji created SOLR-6219:


 Summary: I have an XML document from which I want to get the 
Element values by using XPath predicates.  For me it's not working. By the way, 
I am testing it in Solr 4.7. Help is highly appreciated. Thanks
 Key: SOLR-6219
 URL: https://issues.apache.org/jira/browse/SOLR-6219
 Project: Solr
  Issue Type: Bug
 Environment: Am using Fedora 14 with java 6.
Reporter: Balaji






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #648: POMs out of sync

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/648/

No tests ran.

Build Log:
[...truncated 38842 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:4.10-SNAPSHOT: checking for updates from 
sonatype.releases
[artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on 
download: local = '780ba3cf6b6eb0f7c9f6d41d8d25a86a2f46b0c4'; remote = '
[artifact:dependencies] 301' - RETRYING
[artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on 
download: local = '780ba3cf6b6eb0f7c9f6d41d8d25a86a2f46b0c4'; remote = '
[artifact:dependencies] 301' - IGNORING
[artifact:dependencies] An error has occurred while processing the Maven 
artifact tasks.

[...truncated 18 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:174: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/build.xml:512:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:1531:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:560:
 Unable to resolve artifact: Unable to get dependency information: Unable to 
read the metadata file for artifact 'org.apache.lucene:lucene-codecs:jar': 
Error getting POM for 'org.apache.lucene:lucene-codecs' from the repository: 
Unable to read local copy of metadata: Cannot read metadata from 
'/home/hudson/.m2/repository/org/apache/lucene/lucene-codecs/4.10-SNAPSHOT/maven-metadata-sonatype.releases.xml':
 end tag name  must match start tag name  from line 5 (position: 
TEXT seen ...\r\n... @6:8) 
  org.apache.lucene:lucene-codecs:pom:4.10-SNAPSHOT


 for project org.apache.lucene:lucene-codecs
  org.apache.lucene:lucene-codecs:jar:4.10-SNAPSHOT

from the specified remote repositories:
  central (http://repo1.maven.org/maven2),
  sonatype.releases (http://oss.sonatype.org/content/repositories/releases),
  Nexus (http://repository.apache.org/snapshots)

Path to dependency: 
1) org.apache.lucene:lucene-test-framework:jar:4.10-SNAPSHOT



Total time: 29 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5792) Improve our packed *AppendingLongBuffer

2014-07-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048711#comment-14048711
 ] 

Robert Muir commented on LUCENE-5792:
-

Thanks Adrien!

+1 to commit.

Maybe when committing, you can see if those same iterators can now be static? 
I'm not sure they refer to anything in the parent class anymore.
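For reference, the difference matters because a non-static inner class keeps a hidden reference to its enclosing instance (the synthetic this$0 field), which a static nested class does not, so its instances cannot keep the parent object alive. A tiny illustration, unrelated to the Lucene classes themselves:

```java
// Illustration only: a non-static inner class keeps a hidden reference to
// its enclosing instance (the synthetic this$0 field); a static nested
// class does not, so its instances cannot keep the parent alive.
public class Outer {
  class Inner {}               // implicit reference to Outer.this
  static class StaticNested {} // no reference to Outer

  public static void main(String[] args) {
    System.out.println(Inner.class.getDeclaredFields().length);        // 1 (this$0)
    System.out.println(StaticNested.class.getDeclaredFields().length); // 0
  }
}
```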

> Improve our packed *AppendingLongBuffer
> ---
>
> Key: LUCENE-5792
> URL: https://issues.apache.org/jira/browse/LUCENE-5792
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5792.patch, LUCENE-5792.patch, LUCENE-5792.patch
>
>
> Since these classes are writeable, they need a buffer in order to stage 
> pending changes for efficiency reasons. The issue is that at read-time, the 
> code then needs, for every call to {{get}} to check whether the requested 
> value is in the buffer of pending values or has been packed into main 
> storage, which is inefficient.
> I would like to fix these APIs to separate the writer from the reader, the 
> latter being immutable.
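A rough sketch of the proposed split (names are illustrative and the real implementation bit-packs the values): the writer stages pending values, and build() freezes them into an immutable reader whose get() never has to check a pending buffer.

```java
import java.util.Arrays;

// Illustrative names only; the real implementation bit-packs the values.
public class LongBufferSketch {
  public static final class Writer {
    private long[] pending = new long[16];
    private int size;

    public void add(long v) {
      if (size == pending.length) pending = Arrays.copyOf(pending, size * 2);
      pending[size++] = v;
    }

    /** Freeze into an immutable reader: no pending buffer at read time. */
    public Reader build() {
      return new Reader(Arrays.copyOf(pending, size));
    }
  }

  public static final class Reader {
    private final long[] packed;
    Reader(long[] packed) { this.packed = packed; }
    public long get(int i) { return packed[i]; } // single lookup, no branch on a pending buffer
    public int size() { return packed.length; }
  }
}
```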



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Why not use MappedByteBuffer as IndexOutput for MMapDirectory ?

2014-07-01 Thread Yanjiu Huang

Hi,

Maybe there is some misunderstanding about MMapDirectory and Directory on 
my part. But when I went through the MMapDirectory source code, I found that 
the IndexOutput it uses wraps a FileOutputStream. Since we get the 
MappedByteBuffer when we read the index file, can we share that ByteBuffer 
to make write operations more efficient?
I know that sharing between write and read operations would make things more 
complex, but I just want to know whether it would be a better option for 
MMapDirectory.


If there is some problem, please feel free to tell me. Thank you.

BR.
Yanjiu
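For context, a minimal sketch of what the question envisions: mapping a file READ_WRITE and writing through the MappedByteBuffer directly. This is only an illustration of the alternative, not how MMapDirectory works; it maps files for reading and writes through a regular output stream.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapWriteSketch {
  /** Write a long through a READ_WRITE mapping, then read it back. */
  public static long roundTrip() throws IOException {
    Path p = Files.createTempFile("mmap", ".bin");
    try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ,
                                           StandardOpenOption.WRITE)) {
      // mapping beyond the current size extends the file to 8 bytes
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
      buf.putLong(0, 42L); // write directly into the mapped region
      buf.force();         // flush dirty pages to disk
    }
    long value;
    try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 8);
      value = buf.getLong(0);
    }
    Files.delete(p);
    return value;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(roundTrip()); // prints 42
  }
}
```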

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5800) When using NRTCachingDirectory, if the newly created segment is in compound format, it is always created in the cache (RAMDirectory). It will cause large segments to be referenced by IndexSearcher in memory.

2014-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5800.
-

Resolution: Duplicate

See LUCENE-5724

> When using NRTCachingDirectory, if the new created segment is in compound 
> format, it is always created in cache(RAMDirectory). It will cause large 
> segments referenced by IndexSearcher in memory. 
> ---
>
> Key: LUCENE-5800
> URL: https://issues.apache.org/jira/browse/LUCENE-5800
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Zhijiang Wang
>
> When using NRTCachingDirectory, if the newly created segment is in compound 
> format, the real context passed to createOutput(String name, IOContext 
> context) is not used; IOContext.DEFAULT is used instead. As a result, 
> estimatedMergeBytes and estimatedSegmentSize are always smaller than 
> maxMergeSizeBytes and maxCachedBytes, so a newly created compound segment 
> always ends up in the cache (RAMDirectory). These large merged segments are 
> then referenced by the ReaderPool in the IndexWriter when the NRT feature 
> is used, consuming a lot of process memory, even though they have already 
> been synced to disk at commit.
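The failure mode can be seen in a simplified, standalone sketch of the size check the reporter describes (modeled on the caching decision in NRTCachingDirectory, but not Lucene's exact code). With IOContext.DEFAULT there is no merge/flush info, so the size estimate is 0 and the check always decides to cache the file.

```java
public class NrtCacheCheckSketch {
    // Example thresholds; the real values come from the directory's config.
    static final long MAX_MERGE_SIZE_BYTES = 4L << 20;
    static final long MAX_CACHED_BYTES = 48L << 20;

    /**
     * estimatedBytes would come from context.mergeInfo.estimatedMergeBytes
     * or context.flushInfo.estimatedSegmentSize; a DEFAULT context carries
     * no estimate, i.e. 0.
     */
    static boolean doCacheWrite(long estimatedBytes, long cachedBytes) {
        return estimatedBytes <= MAX_MERGE_SIZE_BYTES
                && estimatedBytes + cachedBytes <= MAX_CACHED_BYTES;
    }

    public static void main(String[] args) {
        // DEFAULT context (estimate 0): always cached -- the reported bug.
        System.out.println(doCacheWrite(0, 0));          // true
        // A real merge estimate above the limit: bypasses the cache.
        System.out.println(doCacheWrite(64L << 20, 0));  // false
    }
}
```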



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4756 - Still Failing

2014-07-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4756/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=2605, 
name=Thread-1307, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]  
   at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=2605, name=Thread-1307, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)
at __randomizedtesting.SeedInfo.seed([2DB5BF749262FFEC]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2605, name=Thread-1307, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=2605, name=Thread-1307, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.ne

[jira] [Created] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-01 Thread Nicola Buso (JIRA)
Nicola Buso created LUCENE-5801:
---

 Summary: Resurrect 
org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso


In Lucene versions after 4.6.1 the class:
org.apache.lucene.facet.util.OrdinalMappingAtomicReader

was removed; it should be resurrected because it is used when merging 
indexes whose related taxonomies have been merged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5800) When using NRTCachingDirectory, if the new created segment is in compound format, it is always created in cache(RAMDirectory). It will cause large segments referenced by

2014-07-01 Thread Zhijiang Wang (JIRA)
Zhijiang Wang created LUCENE-5800:
-

 Summary: When using NRTCachingDirectory, if the new created 
segment is in compound format, it is always created in cache(RAMDirectory). It 
will cause large segments referenced by IndexSearcher in memory. 
 Key: LUCENE-5800
 URL: https://issues.apache.org/jira/browse/LUCENE-5800
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Zhijiang Wang


When using NRTCachingDirectory, if the newly created segment is in compound 
format, the real context passed to createOutput(String name, IOContext 
context) is not used; IOContext.DEFAULT is used instead. As a result, 
estimatedMergeBytes and estimatedSegmentSize are always smaller than 
maxMergeSizeBytes and maxCachedBytes, so a newly created compound segment 
always ends up in the cache (RAMDirectory). These large merged segments are 
then referenced by the ReaderPool in the IndexWriter when the NRT feature 
is used, consuming a lot of process memory, even though they have already 
been synced to disk at commit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5792) Improve our packed *AppendingLongBuffer

2014-07-01 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5792:
-

Attachment: LUCENE-5792.patch

Good point, here is an updated patch.

> Improve our packed *AppendingLongBuffer
> ---
>
> Key: LUCENE-5792
> URL: https://issues.apache.org/jira/browse/LUCENE-5792
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5792.patch, LUCENE-5792.patch, LUCENE-5792.patch
>
>
> Since these classes are writable, they need a buffer in order to stage 
> pending changes, for efficiency reasons. The issue is that at read time, 
> the code then needs, for every call to {{get}}, to check whether the 
> requested value is in the buffer of pending values or has been packed into 
> main storage, which is inefficient.
> I would like to fix these APIs to separate the writer from the reader, the 
> latter being immutable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6159) cancelElection fails on uninitialized ElectionContext

2014-07-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048585#comment-14048585
 ] 

ASF subversion and git services commented on SOLR-6159:
---

Commit 1606997 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1606997 ]

SOLR-6159: A ZooKeeper session expiry during setup can keep LeaderElector from 
joining elections

> cancelElection fails on uninitialized ElectionContext
> -
>
> Key: SOLR-6159
> URL: https://issues.apache.org/jira/browse/SOLR-6159
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.8.1
>Reporter: Steven Bower
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 5.0, 4.10
>
> Attachments: SOLR-6159.patch, SOLR-6159.patch
>
>
> I had a solr collection that basically was out of memory (no exception, just 
> continuous 80-90 second full GCs). This of course is not a good state, but 
> when in this state, every time you come out of a GC your zookeeper session 
> has expired, causing all kinds of havoc. Anyway, I found a bug in the 
> condition where, during LeaderElector.setup(), if you get a Zookeeper 
> error, LeaderElector.context gets set to a context that is not fully 
> initialized (i.e. hasn't called joinElection).
> Once this happens the node can no longer attempt to join elections, because 
> every attempt fails when the LeaderElector calls cancelElection() on the 
> previous ElectionContext.
> Some logs are below, and I've attached a patch that does the following:
> * Move the setting of LeaderElector.context in the setup call to the end of 
> the call, so it is only set if setup completes.
> * Added a check for leaderSeqPath being null in 
> ElectionContext.cancelElection.
> * Made leaderSeqPath volatile as it is accessed directly by multiple 
> threads.
> * Set LeaderElector.context = null when joinElection fails.
> There may be other issues; the patch is focused on breaking the failure 
> loop that occurs when initialization of the ElectionContext fails.
> {noformat}
> 2014-06-08 23:14:57.805 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Opening socket connection to server host1/10.122.142.31:1234. Will not 
> attempt to authenticate using SASL (unknown error)
> 2014-06-08 23:14:57.806 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Socket connection established to host1/10.122.142.31:1234, initiating session
> 2014-06-08 23:14:57.810 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Unable to reconnect to ZooKeeper service, session 0x2467d956c8d0446 has 
> expired, closing socket connection
> 2014-06-08 23:14:57.816 INFO  ConnectionManager [main-EventThread] - Watcher 
> org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
> name:ZooKeeperConnection 
> Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  got event WatchedEvent state:Expired type:None path:null path:null type:None
> 2014-06-08 23:14:57.817 INFO  ConnectionManager [main-EventThread] - Our 
> previous ZooKeeper session was expired. Attempting to reconnect to recover 
> relationship with ZooKeeper...
> 2014-06-08 23:14:57.817 INFO  DefaultConnectionStrategy [main-EventThread] - 
> Connection expired - starting a new one...
> 2014-06-08 23:14:57.817 INFO  ZooKeeper [main-EventThread] - Initiating 
> client connection, 
> connectString=host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  sessionTimeout=15000 
> watcher=org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1
> 2014-06-08 23:14:57.857 INFO  ConnectionManager [main-EventThread] - Waiting 
> for client to connect to ZooKeeper
> 2014-06-08 23:14:57.859 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Opening socket connection to server host4/172.17.14.107:1234. Will not 
> attempt to authenticate using SASL (unknown error)
> 2014-06-08 23:14:57.891 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Socket connection established to host4/172.17.14.107:1234, initiating session
> 2014-06-08 23:14:57.906 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Session establishment complete on server host4/172.17.14.107:1234, sessionid 
> = 0x4467d8d79260486, negotiated timeout = 15000
> 2014-06-08 23:14:57.907 INFO  ConnectionManager [main-EventThread] - Watcher 
> org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
> name:ZooKeeperConnection 
> Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  got event WatchedEvent state:SyncConnected type:None path:null path:null 
> type:None
> 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - Client 
> is connected to ZooKeeper
> 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - 
> Connection 
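The fixes the patch describes can be sketched as follows. These are hypothetical, simplified names, not Solr's actual classes: the context is published only after setup fully completes, and cancelElection tolerates a context whose setup never finished.

```java
public class ElectionSketch {
    static final class ElectionContext {
        volatile String leaderSeqPath;  // volatile: read by multiple threads

        void cancelElection() {
            if (leaderSeqPath == null) {
                return;  // setup never completed; nothing to cancel
            }
            // ... delete the election node at leaderSeqPath ...
        }
    }

    static final class LeaderElector {
        volatile ElectionContext context;

        void setup(ElectionContext candidate) {
            try {
                joinElection(candidate);
                context = candidate;  // publish only after setup completed
            } catch (RuntimeException zkError) {
                context = null;  // never keep a half-initialized context
                throw zkError;
            }
        }

        private void joinElection(ElectionContext c) {
            // placeholder for the ZooKeeper call; the path is made up
            c.leaderSeqPath = "/leader_elect/n_0000000001";
        }
    }

    public static void main(String[] args) {
        ElectionContext ctx = new ElectionContext();
        ctx.cancelElection();  // safe no-op before setup
        LeaderElector elector = new LeaderElector();
        elector.setup(ctx);
        System.out.println(ctx.leaderSeqPath);
    }
}
```

The key property is that a failed setup leaves no partially initialized context behind, so a later cancelElection() cannot fail on it and the election loop is not broken.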

[jira] [Resolved] (SOLR-6159) cancelElection fails on uninitialized ElectionContext

2014-07-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6159.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks Steven!

> cancelElection fails on uninitialized ElectionContext
> -
>
> Key: SOLR-6159
> URL: https://issues.apache.org/jira/browse/SOLR-6159
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.8.1
>Reporter: Steven Bower
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 5.0, 4.10
>
> Attachments: SOLR-6159.patch, SOLR-6159.patch
>
>
> I had a solr collection that basically was out of memory (no exception, just 
> continuous 80-90 second full GCs). This of course is not a good state, but 
> when in this state, every time you come out of a GC your zookeeper session 
> has expired, causing all kinds of havoc. Anyway, I found a bug in the 
> condition where, during LeaderElector.setup(), if you get a Zookeeper 
> error, LeaderElector.context gets set to a context that is not fully 
> initialized (i.e. hasn't called joinElection).
> Once this happens the node can no longer attempt to join elections, because 
> every attempt fails when the LeaderElector calls cancelElection() on the 
> previous ElectionContext.
> Some logs are below, and I've attached a patch that does the following:
> * Move the setting of LeaderElector.context in the setup call to the end of 
> the call, so it is only set if setup completes.
> * Added a check for leaderSeqPath being null in 
> ElectionContext.cancelElection.
> * Made leaderSeqPath volatile as it is accessed directly by multiple 
> threads.
> * Set LeaderElector.context = null when joinElection fails.
> There may be other issues; the patch is focused on breaking the failure 
> loop that occurs when initialization of the ElectionContext fails.
> {noformat}
> 2014-06-08 23:14:57.805 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Opening socket connection to server host1/10.122.142.31:1234. Will not 
> attempt to authenticate using SASL (unknown error)
> 2014-06-08 23:14:57.806 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Socket connection established to host1/10.122.142.31:1234, initiating session
> 2014-06-08 23:14:57.810 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
> Unable to reconnect to ZooKeeper service, session 0x2467d956c8d0446 has 
> expired, closing socket connection
> 2014-06-08 23:14:57.816 INFO  ConnectionManager [main-EventThread] - Watcher 
> org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
> name:ZooKeeperConnection 
> Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  got event WatchedEvent state:Expired type:None path:null path:null type:None
> 2014-06-08 23:14:57.817 INFO  ConnectionManager [main-EventThread] - Our 
> previous ZooKeeper session was expired. Attempting to reconnect to recover 
> relationship with ZooKeeper...
> 2014-06-08 23:14:57.817 INFO  DefaultConnectionStrategy [main-EventThread] - 
> Connection expired - starting a new one...
> 2014-06-08 23:14:57.817 INFO  ZooKeeper [main-EventThread] - Initiating 
> client connection, 
> connectString=host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  sessionTimeout=15000 
> watcher=org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1
> 2014-06-08 23:14:57.857 INFO  ConnectionManager [main-EventThread] - Waiting 
> for client to connect to ZooKeeper
> 2014-06-08 23:14:57.859 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Opening socket connection to server host4/172.17.14.107:1234. Will not 
> attempt to authenticate using SASL (unknown error)
> 2014-06-08 23:14:57.891 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Socket connection established to host4/172.17.14.107:1234, initiating session
> 2014-06-08 23:14:57.906 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
> Session establishment complete on server host4/172.17.14.107:1234, sessionid 
> = 0x4467d8d79260486, negotiated timeout = 15000
> 2014-06-08 23:14:57.907 INFO  ConnectionManager [main-EventThread] - Watcher 
> org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
> name:ZooKeeperConnection 
> Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
>  got event WatchedEvent state:SyncConnected type:None path:null path:null 
> type:None
> 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - Client 
> is connected to ZooKeeper
> 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - 
> Connection with ZooKeeper reestablished.
> 2014-06-08 23:14:57.911 ERROR ZkController [Thread-203] - 
> :org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired fo

Re: Revert OrdinalMappingAtomicReader

2014-07-01 Thread Shai Erera
This is definitely a mistake - this class should not have been removed in
the recent API changes, as the Taxonomy API (and concept) hasn't changed. I
guess if it had been under the o.a.l.facet.taxonomy package, it would have
stayed.

Would you mind opening an issue? If you also want to try and resurrect it,
go ahead. Otherwise I'll try to do it this week.

Shai


On Mon, Jun 30, 2014 at 5:43 PM, Nicola Buso  wrote:

> Hi,
>
> in the facet module version > 4.6.1 the class OrdinalMappingAtomicReader
> disappeared.
> Is it possible to reintroduce it or give a clue of how to merge search
> indexes after the related taxonomy indexes are merged with
> DirectoryTaxonomyWriter.addTaxonomy(...)?
>
> Is it better to add a JIRA ticket for this?
>
> Please let me know,
>
>
>
> Nicola Buso
>
>
> --
> Nicola Buso
> Software Engineer - Web Production Team
>
> European Bioinformatics Institute (EMBL-EBI)
> European Molecular Biology Laboratory
>
> Wellcome Trust Genome Campus
> Hinxton
> Cambridge CB10 1SD
> United Kingdom
>
> URL: http://www.ebi.ac.uk
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

