[jira] [Commented] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063800#comment-15063800
 ] 

Jan Høydahl commented on SOLR-8429:
---

bq. I'm kinda against any rule which requires a user to read documentation to 
understand. The rule of thumb is if a user looks at the security.json he should 
have enough idea on what could happen.

Agree, but how can a user reading this {{security.json}} 
{code}
{"authentication": {"class": "solr.BasicAuthPlugin",  "credentials": {"solr": 
"i9buKe/RhJV5bF/46EI9xmVVYyrnbg9zXf+2FrFwcy0= OTg3"}}}
{code}
...have any clue that absolutely nothing will be protected -- unless that was 
the default? On the other hand, if he saw {{"blockUnknown":false}} in there, 
he'd be explicitly warned that it is necessary to cover every single path in 
{{AuthorizationPlugin}}.

Related: Should we protect the user against locking herself out, i.e. throw an 
exception if {{blockUnknown}} is set through the API before there are any 
registered users?
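The semantics being debated can be sketched roughly as follows (a hypothetical Python sketch with made-up names such as authenticate and credentials_store, not Solr's actual BasicAuthPlugin code):

```python
def authenticate(request, credentials_store, block_unknown):
    """Sketch of the blockUnknown decision (hypothetical, not Solr's code).

    Returns "pass" when an anonymous request may proceed without a principal,
    "ok" when credentials check out, and "401" otherwise. Real Solr stores a
    salted SHA-256 hash in security.json, not the plain password.
    """
    auth = request.get("authorization")  # (user, password) tuple or None
    if auth is None:
        # No credentials supplied: blockUnknown alone decides the outcome.
        return "401" if block_unknown else "pass"
    user, password = auth
    if credentials_store.get(user) == password:
        return "ok"
    return "401"
```

With block_unknown=False an anonymous request simply falls through to whatever paths the authorization rules happen to cover, which is the foot-gun described above.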


> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> Users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later using the command:
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-property": {"blockUnknown": true}}'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8220) Read field from docValues for non stored fields

2015-12-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063741#comment-15063741
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8220 at 12/18/15 9:56 AM:
--

Btw, just found out that not all query paths actually use a DocsStreamer. I am 
checking what this could be down to.
EDIT: Sorry, I was seeing ghosts. I was trying this from the admin UI, but I 
hadn't set the breakpoint properly.


was (Author: ichattopadhyaya):
Btw, just found out that not all query paths actually use a DocsStreamer. I am 
checking as to what this could be down to.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> Option 2a would be the easiest implementation and might be sufficient for a 
> first pass; 2b is the current behavior.
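A rough sketch of the proposed fl semantics (hypothetical Python, with made-up names; the real logic would live near SolrIndexSearcher / DocsStreamer):

```python
def fields_to_return(fl, schema):
    """Decide which source each requested field is read from.

    Hypothetical model of the proposal, not Solr code.
    `schema` maps field name -> {"stored": bool, "docValues": bool}.
    - explicit field: stored value if stored, else docValues value
    - "*": stored fields only (current behavior)
    - "+": stored fields plus docValues-only fields
    """
    if fl == "*":
        return {f: "stored" for f, p in schema.items() if p["stored"]}
    if fl == "+":
        out = {f: "stored" for f, p in schema.items() if p["stored"]}
        out.update({f: "docValues" for f, p in schema.items()
                    if not p["stored"] and p["docValues"]})
        return out
    out = {}
    for name in fl.split(","):
        props = schema.get(name)
        if props is None:
            continue  # unknown field names are ignored, as with ordinary fl
        if props["stored"]:
            out[name] = "stored"
        elif props["docValues"]:
            out[name] = "docValues"
    return out
```

For example, with a stored "id" and a docValues-only "year", fl=id,year would serve "id" from stored fields and "year" from docValues.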






[jira] [Updated] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-7865:
-
Attachment: LUCENE_7865.patch

{{num * numFactor}} was being applied too many times, in all of the {{lookup()}} 
methods.
This operation needs to be applied only once, i.e. in the common {{lookup()}} 
called by all the others.
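The bug pattern described above (the over-fetch multiplier applied in every overload instead of once in the shared path) can be sketched like this, as a hypothetical Python model rather than the actual BlendedInfixSuggester code:

```python
class Suggester:
    """Sketch of the fix: apply num * numFactor once, in the common lookup().

    Hypothetical structure; names and methods are illustrative only.
    """
    def __init__(self, num_factor=10):
        self.num_factor = num_factor

    def lookup(self, key, num):
        # The single place where over-fetching is applied, then trimmed
        # back so the caller gets exactly `num` results (suggest.count).
        fetched = self._fetch(key, num * self.num_factor)
        return fetched[:num]

    def lookup_with_contexts(self, key, contexts, num):
        # Overloads delegate without multiplying again (contexts ignored
        # in this sketch).
        return self.lookup(key, num)

    def _fetch(self, key, n):
        # Stand-in for the index lookup: returns up to n candidates.
        return [f"{key}-{i}" for i in range(n)]
```

If each overload multiplied by numFactor before delegating, the effective fetch size would compound and the final trim to `num` could be skipped, returning more results than suggest.count asked for, as in the failing test below.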

[~mikemccand], please help review this ...

Thank you very much.


> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}






[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2015-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063790#comment-15063790
 ] 

Sébastien Cail commented on SOLR-7495:
--

Hi,
I'm having the same problem in Solr 5.3.0.

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> (schema.xml content was mangled by the mail archive's tag stripping; only 
> attribute fragments survive: fields declared with multiValued="false" 
> required="true", text field types using stopwords.txt, synonyms.txt and 
> edge n-grams with maxGramSize="15", and a 
> solr.SpatialRecursivePrefixTreeFieldType with distErrPct="0.025" 
> maxDistErr="0.09" units="degrees")
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> 

[jira] [Commented] (SOLR-7452) json facet api returning inconsistent counts in cloud set up

2015-12-18 Thread Vishnu Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063678#comment-15063678
 ] 

Vishnu Mishra commented on SOLR-7452:
-

Any progress on this issue...

> json facet api returning inconsistent counts in cloud set up
> 
>
> Key: SOLR-7452
> URL: https://issues.apache.org/jira/browse/SOLR-7452
> Project: Solr
>  Issue Type: Bug
>  Components: faceting
>Affects Versions: 5.1
>Reporter: Vamsi Krishna D
>  Labels: count, facet, sort
> Fix For: 5.2
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> While using the newly added JSON term facet API 
> (http://yonik.com/json-facet-api/#TermsFacet) I am encountering inconsistent 
> counts for faceted values (note: I am running Solr in cloud mode). For 
> example, consider that I have txns_id (a unique field or key), 
> consumer_number, and amount. Now, for 10 million such records, let's say I 
> query for 
> q=*:*=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> the results are as follows ( some are omitted ):
> "facets":{
> "count":6641277,
> "biskatoo":{
>   "numBuckets":3112708,
>   "buckets":[{
>   "val":"surya",
>   "count":4,
>   "y":2.264506},
>   {
>   "val":"raghu",
>   "COUNT":3,   // capitalised for recognition 
>   "y":1.8},
> {
>   "val":"malli",
>   "count":4,
>   "y":1.78}]}}}
> but if I restrict the query to 
> q=consumer_number:raghu=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> I get:
>   "facets":{
> "count":4,
> "biskatoo":{
>   "numBuckets":1,
>   "buckets":[{
>   "val":"raghu",
>   "COUNT":4,
>   "y":2429708.24}]}}}
> One can see that the count results are inconsistent (and I found many 
> occasions of inconsistency).
> I have tried the patch https://issues.apache.org/jira/browse/SOLR-7412 but 
> the issue still seems unresolved.
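Not stated in the report, but one classic source of such drift in distributed faceting is that each shard returns only its local top buckets, so a term can lose contributions from shards where it misses the per-shard cutoff. A toy Python model of that effect (not Solr code; whether this is the cause here is an open question):

```python
def distributed_facet(shards, limit):
    """Toy model: each shard returns its local top-`limit` buckets by count;
    the merger sums only what it received. A term that misses one shard's
    cutoff silently loses that shard's contribution (no refinement pass).
    """
    merged = {}
    for shard in shards:  # shard: dict of term -> local count
        top = sorted(shard.items(), key=lambda kv: -kv[1])[:limit]
        for term, count in top:
            merged[term] = merged.get(term, 0) + count
    return merged

shard_a = {"raghu": 3, "surya": 5}
shard_b = {"raghu": 1, "malli": 4}
# With limit=1, "raghu" misses both shards' cutoffs and vanishes; with
# limit=2 its true total of 4 appears.
```

Restricting q to a single term sidesteps the cutoff entirely, which matches the pattern observed above where the filtered query returns a different count than the unfiltered facet.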






[jira] [Commented] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063696#comment-15063696
 ] 

ASF subversion and git services commented on SOLR-8434:
---

Commit 1720729 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720729 ]

SOLR-8434: Add a flag 'blockUnknown' to BasicAuthPlugin to block 
unauthenticated requests

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
>
> I should be able to specify the role as {{*}}, which would mean that some 
> authenticated user principal is required to access this resource.
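The proposed semantics, where {{*}} matches any authenticated principal, could look roughly like this (a hypothetical Python sketch, not the RuleBasedAuthorizationPlugin code):

```python
def is_authorized(principal, user_roles, required_roles):
    """Sketch of the proposed wildcard role (hypothetical names).

    "*" requires only that *some* authenticated principal exists,
    not any particular role assignment.
    """
    if principal is None:
        return False                      # unauthenticated: never authorized
    if "*" in required_roles:
        return True                       # any logged-in user passes
    return bool(set(user_roles.get(principal, [])) & set(required_roles))
```

So a permission with role {{*}} still blocks anonymous requests, but grants access to every known user regardless of their roles.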






[jira] [Updated] (SOLR-8395) query-time join (with scoring) for single value numeric fields

2015-12-18 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8395:
---
Attachment: SOLR-8395.patch

I think it's ready.

> query-time join (with scoring) for single value numeric fields
> --
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch, SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test in the Solr 
> NoScore suite. 
> * Alongside that, we can set the _multipleValues_ parameter based on the 
> _fromField_ cardinality declared in the schema;






[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2015-12-18 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: SOLR-8208.patch

I added a couple of assertions.

I suppose the last 
[snippet|https://issues.apache.org/jira/browse/SOLR-8208?focusedCommentId=15063358&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15063358]
 makes a lot of sense, since scoring query parsers do [something|
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java#L95]
 like this.


> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time join specific, thus let's allow 
> specifying any query and parameters for it; call it a sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\]?
> I suppose we can allow specifying a subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for the subquery: {{subq1.q}} turns 
> into {{q}} for the subquery, {{subq1.rows}} into {{rows}}
> * {{fromIndex=othercore}} is an optional param that allows running the 
> subquery on another core, like it works for query-time join
> * the itchiest one is referencing a document field from subquery 
> parameters; here I propose to use the local param {{v}} and param dereferencing 
> {{v=$param}}, thus every document field implicitly introduces a parameter for 
> the subquery $\{paramPrefix\}row.$\{fieldName\}; thus the above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone needs to deal with {{rows}}, {{sort}} 
> fields. 
> * naming: \[subquery\], or \[query\], or ? 
> Caveat: it will be quite slow; it handles only the search result page, not 
> the entire result set. 
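The prefix-shifting rule above ({{subq1.q}} becomes {{q}} for the subquery, and each document field implicitly becomes a $\{paramPrefix\}row.$\{fieldName\} parameter) can be sketched as follows (a hypothetical helper in Python, not the patch's code):

```python
def subquery_params(params, prefix, doc):
    """Build the parameter map for one document's subquery (sketch only).

    - request params starting with `prefix` are shifted: "subq1.q" -> "q"
    - each field of the current document is exposed for $-dereferencing
      under its full name "<prefix>row.<field>"
    - unprefixed params of the outer request are not inherited
    """
    shifted = {k[len(prefix):]: v for k, v in params.items()
               if k.startswith(prefix)}
    for field, value in doc.items():
        shifted[f"{prefix}row.{field}"] = value
    return shifted
```

For a request carrying subq1.q={!term f=child_id v=$subq1.row.id} and a result document with id=A7, the subquery would run with q set to that term query and subq1.row.id=A7 available for the $-dereference.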






[jira] [Commented] (SOLR-8395) query-time join (with scoring) for single value numeric fields

2015-12-18 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063705#comment-15063705
 ] 

Cao Manh Dat commented on SOLR-8395:


Thanks for pointing it out to me.

> query-time join (with scoring) for single value numeric fields
> --
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch, SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test in the Solr 
> NoScore suite. 
> * Alongside that, we can set the _multipleValues_ parameter based on the 
> _fromField_ cardinality declared in the schema;






[jira] [Comment Edited] (SOLR-8395) query-time join (with scoring) for single value numeric fields

2015-12-18 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063701#comment-15063701
 ] 

Cao Manh Dat edited comment on SOLR-8395 at 12/18/15 8:57 AM:
--

I think it's ready.
[~mkhludnev] Did I miss or misunderstand something?


was (Author: caomanhdat):
I think it ready.

> query-time join (with scoring) for single value numeric fields
> --
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch, SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test in the Solr 
> NoScore suite. 
> * Alongside that, we can set the _multipleValues_ parameter based on the 
> _fromField_ cardinality declared in the schema;






[jira] [Commented] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063735#comment-15063735
 ] 

ASF subversion and git services commented on SOLR-8429:
---

Commit 1720732 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720732 ]

SOLR-8429 precommit error

> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> Users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later using the command:
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-property": {"blockUnknown": true}}'
> {code}






[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061015#comment-15061015
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/18/15 9:38 AM:
---

After some more digging and experiments it seems realistic that the following 
multi-step process will get us the goals above.
* (/) create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* (/) use {{git-svn}} to mirror (separately) {{lucene/java/*}}, 
{{lucene/dev/*}} and Solr's pre-merge history.
* (/) import those separate history trees into one git repo, use grafts and 
branch filtering to stitch them together.
* use https://rtyley.github.io/bfg-repo-cleaner/ to remove/truncate binary 
blobs in the git repo
* do any finalizing cleanups (clean up any junk branches, tags, add actual 
release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.



was (Author: dweiss):
After some more digging and experiments it seems realistic that the following 
multi-step process will get us the goals above.
* cat /dev/null on all jar files (and possibly other binaries) directly on the 
SVN dump, or use https://rtyley.github.io/bfg-repo-cleaner/ to remove/ truncate 
them on the git repo
* create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* use {{git-svn}} to mirror (separately) {{lucene/java/*}}, {{lucene/dev/*}} 
and Solr's pre-merge history.
* import those separate history trees into one git repo, use grafts and branch 
filtering to stitch them together.
* do any finalizing cleanups (correct commit author addresses, clean up any 
junk branches, tags, add actual release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).




[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15237 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15237/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection reset

Stack Trace:
java.net.SocketException: Connection reset
at 
__randomizedtesting.SeedInfo.seed([9E3E1CED58DC6B2A:35C401F88700ED04]:0)
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:159)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:51)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:196)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:402)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)

[jira] [Updated] (SOLR-8220) Read field from docValues for non stored fields

2015-12-18 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8220:
---
Attachment: SOLR-8220.patch

Thanks for your review, Shalin. I've updated the patch to address your 
suggestions.

bq. The SolrIndexSearcher.decorateDocValueFields method has a 
honourUseDVsAsStoredFlag which is always true. We can remove it?
bq. Same for SolrIndexSearcher.getNonStoredDocValuesFieldNames?

Refactored decorateDocValues() a bit so that the wantsAllFields flag is no longer passed into the method; it is handled in the DocsStreamer itself. The decorateDocValues() method now takes only the field names it needs to act on; the filtering for non-stored DVs happens in DocsStreamer.next() itself.

For the {{fl=\*}} case we need all non-stored DVs that have 
{{useDocValuesAsStored}}=true, but for the general filtering case of 
{{fl=dv1,dv2}} we need to filter using all non-stored DVs (irrespective of the 
useDocValuesAsStored flag), so I've retained this true/false logic in the 
getNonStoredDocValuesFieldNames() method. However, I renamed that method to 
{{getNonStoredDVs(boolean onlyUseDocValuesAsStored)}} and added a clear 
javadoc to this effect.
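
For illustration, here is that true/false logic in isolation. The {{Field}} stand-in and class name below are hypothetical (the real method lives in SolrIndexSearcher and works on the schema's fields), but the filtering mirrors what the comment describes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the schema fields discussed above; the real
// logic lives in SolrIndexSearcher#getNonStoredDVs(boolean).
class NonStoredDVs {
    static final class Field {
        final String name;
        final boolean stored, docValues, useDocValuesAsStored;
        Field(String name, boolean stored, boolean docValues, boolean useDVsAsStored) {
            this.name = name; this.stored = stored;
            this.docValues = docValues; this.useDocValuesAsStored = useDVsAsStored;
        }
    }

    // onlyUseDocValuesAsStored=true  -> the fl=* case
    // onlyUseDocValuesAsStored=false -> explicit fl=dv1,dv2 filtering
    static List<String> getNonStoredDVs(List<Field> schema, boolean onlyUseDocValuesAsStored) {
        List<String> names = new ArrayList<>();
        for (Field f : schema) {
            if (!f.stored && f.docValues
                && (!onlyUseDocValuesAsStored || f.useDocValuesAsStored)) {
                names.add(f.name);
            }
        }
        return names;
    }

    public static void main(String[] args) {
        List<Field> schema = List.of(
            new Field("id", true, true, true),      // stored: never served from DV here
            new Field("dv1", false, true, true),    // non-stored DV, useDocValuesAsStored=true
            new Field("dv2", false, true, false));  // non-stored DV, opted out of fl=*
        System.out.println(getNonStoredDVs(schema, true));   // fl=* case
        System.out.println(getNonStoredDVs(schema, false));  // explicit-fl case
    }
}
```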


bq. The wantsAllFields flag added to SolrIndexSearcher.doc doesn't seem 
necessary. I guess it was added because the patch adds non stored doc values 
fields to the 'fnames' but if we can separate out stored fnames from the 
non-stored doc values to be returned then we can remove this param from both 
SolrIndexSearcher.doc and SolrIndexSearcher.getNonStoredDocValuesFieldNames
I think the original motivation was to deal with cases like {{fl=\*,nonstoredDv1}}. 
Initially the idea was that {{\*}} returns all stored fields, and nonstoredDv1 is 
added to them. But now that {{\*}} takes care of all stored and non-stored DVs, 
this logic isn't needed; the wantsAllFields flag was a leftover from a previous 
patch, which I've now removed.

bq. The pattern matching in the DocsStreamer constructor makes me a bit nervous. 
Where is the pattern matching done for current stored fields?
Keith can weigh in on this better. However, I had a look and found that 
responseWriters (e.g. JSONResponseWriter) receive the whole SolrDocument in the 
{{writeSolrDocument()}} method, where the following loop drops the fields they 
don't need:
{code}
for (String fname : doc.getFieldNames()) {
  if (returnFields != null && !returnFields.wantsField(fname)) {
    continue;
  }
  // ... otherwise write the field out ...
}
{code}
This wantsField() call uses wildcard handling. Given that, our handling of this 
in the DocsStreamer seems fine. It doesn't look costly to me, since it is 
performed only when fl has a pattern, and that pattern is checked against only 
non-stored DVs. Do you think there's something better that can be done which 
I'm missing?

bq. The conditional logic in SolrIndexSearcher.decorateDocValueFields for 
multi-valued fields is too complicated! Can we please simplify this?
Made it simpler. :-)


> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"

[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-12-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063741#comment-15063741
 ] 

Ishan Chattopadhyaya commented on SOLR-8220:


Btw, I just found out that not all query paths actually use a DocsStreamer. I am 
checking what this could be down to.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063806#comment-15063806
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Everything looks good so far. I stitched Solr's and Lucene's histories together 
beautifully locally. Lots of interesting plot twists along the way.

Had to restart git-svn fetches because it occurred to me that:
1) the source of git-svn cannot be my local mirror (because it'd show in commit 
logs); if not for anything else, then for legal reasons we should fetch from 
Apache's SVN directly,
2) fixing author entries is easier in git-svn (via authors.txt).

{{while (!successful()) retry();}}


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15238 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15238/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:48462/_p/sg","node_name":"127.0.0.1:48462__p%2Fsg","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:38405/_p/sg",   
"node_name":"127.0.0.1:38405__p%2Fsg",   "state":"down"}, 
"core_node2":{   "state":"down",   
"base_url":"http://127.0.0.1:48225/_p/sg",   
"core":"c8n_1x3_lf_shard1_replica1",   
"node_name":"127.0.0.1:48225__p%2Fsg"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:48462/_p/sg",   
"node_name":"127.0.0.1:48462__p%2Fsg",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:48462/_p/sg","node_name":"127.0.0.1:48462__p%2Fsg","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:38405/_p/sg",
  "node_name":"127.0.0.1:38405__p%2Fsg",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:48225/_p/sg",
  "core":"c8n_1x3_lf_shard1_replica1",
  "node_name":"127.0.0.1:48225__p%2Fsg"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:48462/_p/sg",
  "node_name":"127.0.0.1:48462__p%2Fsg",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([77ED239E1806A069:FFB91C44B6FACD91]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)

[jira] [Created] (LUCENE-6940) Bulk scoring could speed up MUST_NOT clauses

2015-12-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6940:


 Summary: Bulk scoring could speed up MUST_NOT clauses
 Key: LUCENE-6940
 URL: https://issues.apache.org/jira/browse/LUCENE-6940
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


Today when you have MUST_NOT clauses, the ReqExclScorer is used and needs to 
check the excluded clauses on every iteration. I suspect we could speed things 
up by having a BulkScorer that would advance the excluded clause first and then 
tell the required clause to bulk score up to the next excluded document.
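
As a rough illustration of that idea, here is a toy version over plain sorted doc-ID arrays. Lucene's real DocIdSetIterator/BulkScorer APIs differ, and every name below is hypothetical; the point is only the shape of the loop: advance the excluded iterator once, then let the required clause run over the whole gap up to the next excluded document.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: instead of checking the excluded clause on every iteration
// (ReqExclScorer-style), advance the excluded iterator and "bulk score"
// the required clause over the gap between excluded documents.
class BulkReqExcl {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Collect all docs in `required` that are not in `excluded`, both sorted.
    static List<Integer> score(int[] required, int[] excluded) {
        List<Integer> collected = new ArrayList<>();
        int r = 0, e = 0;
        int min = 0;
        while (min != NO_MORE_DOCS) {
            // advance the excluded iterator to the first excluded doc >= min
            while (e < excluded.length && excluded[e] < min) e++;
            int nextExcluded = (e < excluded.length) ? excluded[e] : NO_MORE_DOCS;
            // bulk-score the required clause over [min, nextExcluded)
            while (r < required.length && required[r] < nextExcluded) {
                if (required[r] >= min) collected.add(required[r]);
                r++;
            }
            if (nextExcluded == NO_MORE_DOCS) break;
            min = nextExcluded + 1; // skip the excluded doc itself
        }
        return collected;
    }

    public static void main(String[] args) {
        // doc 2 and doc 9 are negated, so only 1, 5, 7 survive
        System.out.println(score(new int[]{1, 2, 5, 7, 9}, new int[]{2, 9}));
    }
}
```

When the excluded clause is sparse, the inner required loop runs over long gaps without touching the excluded iterator at all, which matches the intuition that the win is largest when the negative clauses match far fewer documents than the positive ones.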






[jira] [Updated] (LUCENE-6940) Bulk scoring could speed up MUST_NOT clauses

2015-12-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6940:
-
Attachment: LUCENE-6940.patch

Here is a quick patch (disclaimer: not commented and not tested) to demonstrate 
the idea. It makes the new bulk scorer used either:
 - when there is a single FILTER/MUST clause, no SHOULD clauses, and some 
MUST_NOT clauses
 - or when there are some SHOULD clauses, no FILTER/MUST clauses, and some 
MUST_NOT clauses

I added some tasks to wikimedium.10M.nostopwords.tasks and ran it through 
luceneutil. As expected, this seems to yield a speedup especially when the 
negative clauses match many fewer documents than the positive clauses.

{noformat}
diff --git a/tasks/wikimedium.10M.nostopwords.tasks 
b/tasks/wikimedium.10M.nostopwords.tasks
index 342070c..8991121 100644
--- a/tasks/wikimedium.10M.nostopwords.tasks
+++ b/tasks/wikimedium.10M.nostopwords.tasks
@@ -13361,3 +13361,19 @@ OrNotHighLow: -do necessities # freq=511178 freq=1195
 OrHighNotLow: do -necessities # freq=511178 freq=1195
 OrNotHighLow: -had halfback # freq=1246743 freq=1205
 OrHighNotLow: had -halfback # freq=1246743 freq=1205
+AllNotHigh: *:* -been # freq=1041183
+AllNotHigh: *:* -states # freq=1034872
+AllNotHigh: *:* -time # freq=1032071
+AllNotHigh: *:* -when # freq=1027487
+AllNotLow: *:* -factor # freq=37866
+AllNotLow: *:* -migration # freq=37862
+AllNotLow: *:* -maintained # freq=37840
+AllNotLow: *:* -norwegian # freq=37836
+OrHighHighNotLow: several following -factor # freq=436129 freq=416515 
freq=37866
+OrHighHighNotLow: publisher end -migration # freq=1289029 freq=526636 
freq=37862
+OrHighHighNotLow: 2009 film -maintaine # freq=887702 freq=432758 freq=37840
+OrHighHighNotLow: http known -norwegian # freq=3493581 freq=607158 freq=37836
+OrHighLowNotHigh: 2005 jorgensen -been # freq=835460 freq=837 freq=1041183
+OrHighLowNotHigh: like undivided -states # freq=479390 freq=1512 freq=1034872
+OrHighLowNotHigh: use coy -time # freq=597053 freq=1198 freq=1032071
+OrHighLowNotHigh: been highperformanceengines -when # freq=1041183 freq=1155 
freq=1027487
{noformat}

{noformat}
Task                QPS baseline  StdDev   QPS patch  StdDev   Pct diff
OrHighLowNotHigh       19.54  (2.6%)      18.59  (4.2%)   -4.9% ( -11% -   1%)
OrHighMed              34.32  (3.5%)      33.03  (4.8%)   -3.7% ( -11% -   4%)
OrHighHigh             26.95  (3.7%)      25.97  (4.9%)   -3.6% ( -11% -   5%)
Fuzzy2                 82.74 (16.0%)      80.29 (16.4%)   -3.0% ( -30% -  35%)
AndHighLow            502.91  (5.7%)     496.63  (3.0%)   -1.2% (  -9% -   7%)
AndHighMed            236.44  (2.9%)     234.34  (2.6%)   -0.9% (  -6% -   4%)
OrNotHighMed          222.75  (2.9%)     220.87  (2.4%)   -0.8% (  -5% -   4%)
Respell                60.25  (3.0%)      60.38  (2.7%)    0.2% (  -5% -   6%)
MedSloppyPhrase        21.73  (2.3%)      21.92  (2.5%)    0.8% (  -3% -   5%)
Fuzzy1                 57.18  (8.0%)      57.78  (5.8%)    1.1% ( -11% -  16%)
LowSloppyPhrase        25.96  (1.9%)      26.24  (2.1%)    1.1% (  -2% -   5%)
HighSloppyPhrase       29.99  (2.5%)      30.37  (2.7%)    1.3% (  -3% -   6%)
MedPhrase              60.11  (2.8%)      61.15  (3.1%)    1.7% (  -4% -   7%)
AndHighHigh            32.86  (3.0%)      33.56  (3.0%)    2.1% (  -3% -   8%)
LowPhrase              59.36  (2.7%)      60.69  (3.2%)    2.2% (  -3% -   8%)
OrHighLow              78.50  (3.6%)      80.33  (4.3%)    2.3% (  -5% -  10%)
HighPhrase             17.32  (2.1%)      17.73  (1.9%)    2.4% (  -1% -   6%)
LowSpanNear            34.90  (2.8%)      35.75  (2.4%)    2.4% (  -2% -   7%)
MedSpanNear            30.83  (2.9%)      31.59  (2.0%)    2.4% (  -2% -   7%)
OrNotHighLow          982.57  (4.2%)    1009.18  (2.8%)    2.7% (  -4% -  10%)
HighSpanNear           10.39  (3.8%)      10.76  (3.7%)    3.5% (  -3% -  11%)
Wildcard               64.30  (4.2%)      67.27  (5.2%)    4.6% (  -4% -  14%)
HighTerm              110.90  (5.2%)     117.51  (6.7%)    6.0% (  -5% -  18%)
MedTerm               155.42  (5.3%)     165.05  (6.9%)    6.2% (  -5% -  19%)
OrNotHighHigh          40.19  (1.9%)      42.69  (3.2%)    6.2% (   1% -  11%)
Prefix3                87.35  (6.2%)      93.98  (6.9%)    7.6% (  -5% -  22%)
LowTerm               574.81  (9.0%)     625.04  (9.6%)    8.7% (  -9% -  30%)
IntNRQ                 11.95

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063848#comment-15063848
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Does anybody know Scala? I'd love to filter the JAR files to zero size using 
https://rtyley.github.io/bfg-repo-cleaner/ but the source code is way beyond my 
comprehension.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Created] (SOLR-8440) Script support for enabling basic auth

2015-12-18 Thread JIRA
Jan Høydahl created SOLR-8440:
-

 Summary: Script support for enabling basic auth
 Key: SOLR-8440
 URL: https://issues.apache.org/jira/browse/SOLR-8440
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Reporter: Jan Høydahl


Now that BasicAuthPlugin will be able to work without an AuthorizationPlugin 
(SOLR-8429), it would be sweet to provide a super simple way to "Password 
protect Solr"™ right from the command line:

{noformat}
bin/solr basicAuth -adduser -user solr -pass SolrRocks
{noformat}

It would take the mystery out of enabling one single password across the board. 
The command would do something like this:
# Check if HTTPS is enabled, and if not, print a friendly warning
# Check if {{/security.json}} already exists
## NO => create one with only plugin class defined
## YES => Abort if exists but plugin is not {{BasicAuthPlugin}}
# Using security REST API, add the new user
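
A rough shell sketch of those steps, purely illustrative since the proposed subcommand does not exist yet; the URL, the credentials, and the commented-out REST call are placeholders:

```shell
#!/bin/sh
# Sketch of the proposed "enable basic auth" flow. SOLR_URL, USER and PASS
# are placeholders; a real implementation would also hash the password.
SOLR_URL="${SOLR_URL:-http://localhost:8983/solr}"
USER="solr"; PASS="SolrRocks"

# 1. Check if HTTPS is enabled, and if not, print a friendly warning
case "$SOLR_URL" in
  https://*) : ;;
  *) echo "WARNING: Solr is not using HTTPS; Basic Auth credentials will travel in clear text." ;;
esac

# 2./3. If /security.json does not exist yet, create one with only the plugin
# class defined (in SolrCloud this would be zkcli.sh -cmd put /security.json);
# abort if it exists but the plugin is not BasicAuthPlugin.
SECURITY_JSON='{"authentication": {"class": "solr.BasicAuthPlugin"}}'
echo "$SECURITY_JSON"

# 4. Add the new user through the security REST API, e.g.:
# curl -u "$USER:$PASS" "$SOLR_URL/admin/authentication" \
#   -H 'Content-Type: application/json' \
#   -d "{\"set-user\": {\"$USER\": \"$PASS\"}}"
```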






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5481 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5481/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  org.apache.solr.cloud.DistribJoinFromCollectionTest.test

Error Message:
Error from server at http://127.0.0.1:58024/to_2x2_shard2_replica1: SolrCloud 
join: from_1x2 has a local replica (from_1x2_shard1_replica2) on 
127.0.0.1:58024_, but it is down

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58024/to_2x2_shard2_replica1: SolrCloud join: 
from_1x2 has a local replica (from_1x2_shard1_replica2) on 127.0.0.1:58024_, 
but it is down
at 
__randomizedtesting.SeedInfo.seed([EBD2B3A3ED7C328F:63868C7943805F77]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.testJoins(DistribJoinFromCollectionTest.java:135)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.test(DistribJoinFromCollectionTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)

[jira] [Updated] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-12-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8048:
--
   Labels: authentication security  (was: )
Fix Version/s: 5.5
  Component/s: (was: ip)
   scripts and tools

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: authentication, security
> Fix For: 5.5
>
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password






[jira] [Updated] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-12-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8048:
--
Component/s: ip

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>  Components: ip
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name and password as a param, say {{-user 
> solr:SolrRocks}}, or alternatively the script should prompt for the user name and password






[jira] [Commented] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063898#comment-15063898
 ] 

Joel Bernstein commented on SOLR-7525:
--

Let's not change the GroupOperation because it has useful functionality. Let's 
create a new ReduceOperation that behaves the way we need it to.

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch
>
>
> This ticket adds a ComplementStream to the Streaming API and Streaming 
> Expression language.
> The ComplementStream will wrap two TupleStreams (StreamA, StreamB) and emit 
> Tuples from StreamA that are not in StreamB.
> Streaming API Syntax:
> {code}
> ComplementStream cstream = new ComplementStream(streamA, streamB, comp);
> {code}
> Streaming Expression syntax:
> {code}
> complement(search(...), search(...), on(...))
> {code}
> Internal implementation will rely on the ReducerStream. The ComplementStream 
> can be parallelized using the ParallelStream.
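A toy Java sketch of the complement contract described above — tuples (reduced here to ids) from StreamA that have no match in StreamB. A HashSet stands in purely for illustration; the actual ReducerStream-based implementation would do a merge-style pass over sorted tuple streams:

```java
import java.util.*;

public class ComplementSketch {
    // Emit the elements of streamA that do not appear in streamB
    // (the A-minus-B contract of the proposed ComplementStream).
    static List<Integer> complement(List<Integer> streamA, List<Integer> streamB) {
        Set<Integer> inB = new HashSet<>(streamB);
        List<Integer> out = new ArrayList<>();
        for (Integer a : streamA) {
            if (!inB.contains(a)) {
                out.add(a);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(complement(Arrays.asList(1, 2, 4), Arrays.asList(2, 3)));
        // [1, 4]
    }
}
```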






[jira] [Comment Edited] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063898#comment-15063898
 ] 

Joel Bernstein edited comment on SOLR-7525 at 12/18/15 12:34 PM:
-

Let's not change the GroupOperation because it has useful functionality. Let's 
create a new ReduceOperation that behaves the way we need it to.

The main reason for adding ReduceOperations was so that we could specialize the 
reduce behavior.


was (Author: joel.bernstein):
Let's not change the GroupOperation because it has useful functionality. Let's 
create a new ReduceOperation that behaves the way we need it to.

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch
>
>
> This ticket adds a ComplementStream to the Streaming API and Streaming 
> Expression language.
> The ComplementStream will wrap two TupleStreams (StreamA, StreamB) and emit 
> Tuples from StreamA that are not in StreamB.
> Streaming API Syntax:
> {code}
> ComplementStream cstream = new ComplementStream(streamA, streamB, comp);
> {code}
> Streaming Expression syntax:
> {code}
> complement(search(...), search(...), on(...))
> {code}
> Internal implementation will rely on the ReducerStream. The ComplementStream 
> can be parallelized using the ParallelStream.






[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063900#comment-15063900
 ] 

ASF subversion and git services commented on SOLR-8433:
---

Commit 1720768 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720768 ]

SOLR-8433: Adding logging for schemes

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:146)
>

[jira] [Created] (SOLR-8441) maxScore is sometimes missing from distributed grouped responses

2015-12-18 Thread Julien MASSENET (JIRA)
Julien MASSENET created SOLR-8441:
-

 Summary: maxScore is sometimes missing from distributed grouped 
responses
 Key: SOLR-8441
 URL: https://issues.apache.org/jira/browse/SOLR-8441
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 5.3
Reporter: Julien MASSENET
Priority: Minor


This issue occurs when using the grouping feature in distributed mode and 
sorting by score.

Each group's {{docList}} in the response is supposed to contain a {{maxScore}} 
entry that holds the maximum score for that group. Using the current releases, 
it sometimes happens that this piece of information is not included:

{code}






[jira] [Updated] (SOLR-8441) maxScore is sometimes missing from distributed grouped responses

2015-12-18 Thread Julien MASSENET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien MASSENET updated SOLR-8441:
--
  Flags: Patch
Description: 
This issue occurs when using the grouping feature in distributed mode and 
sorting by score.

Each group's {{docList}} in the response is supposed to contain a {{maxScore}} 
entry that holds the maximum score for that group. Using the current releases, 
it sometimes happens that this piece of information is not included:

{code}
{
  "responseHeader": {
"status": 0,
"QTime": 42,
"params": {
  "sort": "score desc",
  "fl": "id,score",
  "q": "_text_:\"72\"",
  "group.limit": "2",
  "group.field": "group2",
  "group.sort": "score desc",
  "group": "true",
  "wt": "json",
  "fq": "group2:72 OR group2:45"
}
  },
  "grouped": {
"group2": {
  "matches": 567,
  "groups": [
{
  "groupValue": 72,
  "doclist": {
"numFound": 562,
"start": 0,
"maxScore": 2.0378063,
"docs": [
  {
"id": "29!26551",
"score": 2.0378063
  },
  {
"id": "78!11462",
"score": 2.0298104
  }
]
  }
},
{
  "groupValue": 45,
  "doclist": {
"numFound": 5,
"start": 0,
"docs": [
  {
"id": "72!8569",
"score": 1.8988966
  },
  {
"id": "72!14075",
"score": 1.5191172
  }
]
  }
}
  ]
}
  }
}
{code}

Looking into the issue, it comes from the fact that if a shard does not contain 
a document from that group, trying to merge its {{maxScore}} with real 
{{maxScore}} entries from other shards is invalid (it results in NaN).

I'm attaching a patch containing a fix.
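A minimal Java sketch (not the attached patch) of the failure mode: a shard with no documents in the group has no real maxScore, and a NaN stand-in poisons a naive max-based merge. One possible guard is shown alongside; the actual patch may differ in detail:

```java
public class MaxScoreMergeSketch {
    // Naive merge: Math.max(x, Float.NaN) is NaN, so a single empty shard
    // makes the merged maxScore NaN.
    static float naiveMerge(float[] shardMaxScores) {
        float max = Float.NEGATIVE_INFINITY;
        for (float s : shardMaxScores) {
            max = Math.max(max, s);
        }
        return max;
    }

    // Guarded merge: skip shards that contributed no documents to the group,
    // returning null only if no shard had a score at all.
    static Float guardedMerge(float[] shardMaxScores) {
        Float merged = null;
        for (float s : shardMaxScores) {
            if (!Float.isNaN(s)) {
                merged = (merged == null) ? s : Math.max(merged, s);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        float[] shards = {2.0378063f, Float.NaN, 1.8988966f};
        System.out.println(naiveMerge(shards));   // NaN
        System.out.println(guardedMerge(shards)); // maximum of the non-NaN scores
    }
}
```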

  was:
This issue occurs when using the grouping feature in distributed mode and 
sorting by score.

Each group's {{docList}} in the response is supposed to contain a {{maxScore}} 
entry that holds the maximum score for that group. Using the current releases, 
it sometimes happens that this piece of information is not included:

{code}


> maxScore is sometimes missing from distributed grouped responses
> 
>
> Key: SOLR-8441
> URL: https://issues.apache.org/jira/browse/SOLR-8441
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.3
>Reporter: Julien MASSENET
>Priority: Minor
>
> This issue occurs when using the grouping feature in distributed mode and 
> sorting by score.
> Each group's {{docList}} in the response is supposed to contain a 
> {{maxScore}} entry that holds the maximum score for that group. Using the 
> current releases, it sometimes happens that this piece of information is not 
> included:
> {code}
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 42,
> "params": {
>   "sort": "score desc",
>   "fl": "id,score",
>   "q": "_text_:\"72\"",
>   "group.limit": "2",
>   "group.field": "group2",
>   "group.sort": "score desc",
>   "group": "true",
>   "wt": "json",
>   "fq": "group2:72 OR group2:45"
> }
>   },
>   "grouped": {
> "group2": {
>   "matches": 567,
>   "groups": [
> {
>   "groupValue": 72,
>   "doclist": {
> "numFound": 562,
> "start": 0,
> "maxScore": 2.0378063,
> "docs": [
>   {
> "id": "29!26551",
> "score": 2.0378063
>   },
>   {
> "id": "78!11462",
> "score": 2.0298104
>   }
> ]
>   }
> },
> {
>   "groupValue": 45,
>   "doclist": {
> "numFound": 5,
> "start": 0,
> "docs": [
>   {
> "id": "72!8569",
> "score": 1.8988966
>   },
>   {
> "id": "72!14075",
> "score": 1.5191172
>   }
> ]
>   }
> }
>   ]
> }
>   }
> }
> {code}
> Looking into the issue, it comes from the fact that if a shard does not 
> contain a document from that group, trying to merge its {{maxScore}} with 
> real {{maxScore}} entries from other shards is invalid (it results in NaN).
> I'm attaching a patch containing a fix.




[jira] [Updated] (SOLR-8441) maxScore is sometimes missing from distributed grouped responses

2015-12-18 Thread Julien MASSENET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien MASSENET updated SOLR-8441:
--
Attachment: lucene_solr_5_3-GroupingMaxScore.patch

> maxScore is sometimes missing from distributed grouped responses
> 
>
> Key: SOLR-8441
> URL: https://issues.apache.org/jira/browse/SOLR-8441
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.3
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: lucene_solr_5_3-GroupingMaxScore.patch
>
>
> This issue occurs when using the grouping feature in distributed mode and 
> sorting by score.
> Each group's {{docList}} in the response is supposed to contain a 
> {{maxScore}} entry that holds the maximum score for that group. Using the 
> current releases, it sometimes happens that this piece of information is not 
> included:
> {code}
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 42,
> "params": {
>   "sort": "score desc",
>   "fl": "id,score",
>   "q": "_text_:\"72\"",
>   "group.limit": "2",
>   "group.field": "group2",
>   "group.sort": "score desc",
>   "group": "true",
>   "wt": "json",
>   "fq": "group2:72 OR group2:45"
> }
>   },
>   "grouped": {
> "group2": {
>   "matches": 567,
>   "groups": [
> {
>   "groupValue": 72,
>   "doclist": {
> "numFound": 562,
> "start": 0,
> "maxScore": 2.0378063,
> "docs": [
>   {
> "id": "29!26551",
> "score": 2.0378063
>   },
>   {
> "id": "78!11462",
> "score": 2.0298104
>   }
> ]
>   }
> },
> {
>   "groupValue": 45,
>   "doclist": {
> "numFound": 5,
> "start": 0,
> "docs": [
>   {
> "id": "72!8569",
> "score": 1.8988966
>   },
>   {
> "id": "72!14075",
> "score": 1.5191172
>   }
> ]
>   }
> }
>   ]
> }
>   }
> }
> {code}
> Looking into the issue, it comes from the fact that if a shard does not 
> contain a document from that group, trying to merge its {{maxScore}} with 
> real {{maxScore}} entries from other shards is invalid (it results in NaN).
> I'm attaching a patch containing a fix.






[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063904#comment-15063904
 ] 

Joel Bernstein commented on SOLR-8433:
--

We had another test failure with the same stack trace. Again, the seed does not 
reproduce locally. The printout from the logging is below. The 
SystemDefaultHttpClient is used as expected. The URL has two slashes before the 
collection, but the test still passes locally (Mac).

[junit4]   2> 1638545 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  SHARD ADDRESSS 
##:http://127.0.0.1:53098//collection1
   [junit4]   2> 1638545 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  HTTP Client #:class 
org.apache.http.impl.client.SystemDefaultHttpClient
   [junit4]   2> 1638545 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  SHARD ADDRESSS 
##:http://127.0.0.1:36927//collection1
   [junit4]   2> 1638545 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  HTTP Client #:class 
org.apache.http.impl.client.SystemDefaultHttpClient
   [junit4]   2> 1638546 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  SHARD ADDRESSS 
##:http://127.0.0.1:38826//collection1
   [junit4]   2> 1638546 INFO  (qtp1017581784-14858) [x:collection1] 
o.a.s.h.c.IterativeMergeStrategy  HTTP Client #:class 
org.apache.http.impl.client.SystemDefaultHttpClient

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>

[jira] [Created] (SOLR-8442) Command line tool to enable SSL

2015-12-18 Thread JIRA
Jan Høydahl created SOLR-8442:
-

 Summary: Command line tool to enable SSL
 Key: SOLR-8442
 URL: https://issues.apache.org/jira/browse/SOLR-8442
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Reporter: Jan Høydahl


To simplify the task of enabling SSL in Solr, I suggest we:
* Document in the ref guide how to obtain a trusted SSL cert through 
[letsencrypt.com|http://letsencrypt.com] instead of generating a self-signed one 
(self-signed can still be documented as an alternative)
* Create a {{bin/solr ssl}} tool to assist in converting from {{pem}} to 
{{jks}}, installing the files, updating {{solr.in.sh}}(?), enabling 
{{urlScheme}} etc

This JIRA should probably have a few sub tasks






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15239 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15239/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseParallelGC -XX:-CompactStrings

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:38561/r_x/p/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38561/r_x/p/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([EAF5BB4F7DE50336:62A18495D3196ECE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063905#comment-15063905
 ] 

Joel Bernstein commented on SOLR-8433:
--

I've added more logging to see the schemes that are registered in the HTTP client.

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:146)
>[junit4]   2>  at 
> 

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063906#comment-15063906
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Nevermind, I did it myself.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063905#comment-15063905
 ] 

Joel Bernstein edited comment on SOLR-8433 at 12/18/15 12:44 PM:
-

I've added more logging to see the schemes that are present in the http client.


was (Author: joel.bernstein):
I've adding more logging to see the schemes that are present in the http client.

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> 

[jira] [Commented] (SOLR-8015) HdfsLock may fail to close a FileSystem instance if it cannot immediately obtain an index lock.

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063910#comment-15063910
 ] 

ASF subversion and git services commented on SOLR-8015:
---

Commit 1720773 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720773 ]

SOLR-8015: HdfsLock may fail to close a FileSystem instance if it cannot 
immediately obtain an index lock.

> HdfsLock may fail to close a FileSystem instance if it cannot immediately 
> obtain an index lock.
> ---
>
> Key: SOLR-8015
> URL: https://issues.apache.org/jira/browse/SOLR-8015
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5
>
> Attachments: SOLR-8015.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8015) HdfsLock may fail to close a FileSystem instance if it cannot immediately obtain an index lock.

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063915#comment-15063915
 ] 

ASF subversion and git services commented on SOLR-8015:
---

Commit 1720775 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720775 ]

SOLR-8015: HdfsLock may fail to close a FileSystem instance if it cannot 
immediately obtain an index lock.

> HdfsLock may fail to close a FileSystem instance if it cannot immediately 
> obtain an index lock.
> ---
>
> Key: SOLR-8015
> URL: https://issues.apache.org/jira/browse/SOLR-8015
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5
>
> Attachments: SOLR-8015.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8015) HdfsLock may fail to close a FileSystem instance if it cannot immediately obtain an index lock.

2015-12-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8015.
---
   Resolution: Fixed
Fix Version/s: Trunk

> HdfsLock may fail to close a FileSystem instance if it cannot immediately 
> obtain an index lock.
> ---
>
> Key: SOLR-8015
> URL: https://issues.apache.org/jira/browse/SOLR-8015
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8015.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063918#comment-15063918
 ] 

Noble Paul commented on SOLR-8429:
--

bq.have any clue that absolutely nothing will be protected – unless that was 
the default? 

A person configuring security will follow our documentation, which will have 
{{blockUnknown=true}} in the sample, so his setup is protected automatically.

bq.Related: Should we protect the user against locking herself out, 

Nice to have. In any case, he has the option of overwriting the 
{{security.json}} if he screws up badly.
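A guard against self-lockout could be as simple as refusing the flag while the credentials map is empty. A minimal standalone sketch with hypothetical names (this is not Solr's actual plugin code):

```java
import java.util.Map;

// Hypothetical lockout guard (illustrative, not Solr's actual API):
// refuse to enable blockUnknown while no users are registered, so the
// admin cannot lock herself out of the authentication endpoint.
public class BlockUnknownGuard {
    public static void validate(Map<String, String> credentials, boolean blockUnknown) {
        if (blockUnknown && (credentials == null || credentials.isEmpty())) {
            throw new IllegalStateException(
                "Refusing blockUnknown=true: no users are registered yet");
        }
    }
}
```

With at least one user present the call succeeds; with an empty credentials map it throws before the flag can take effect.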


> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> The users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later
> using the command
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{
> "set-property": {"blockUnknown": true}
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063918#comment-15063918
 ] 

Noble Paul edited comment on SOLR-8429 at 12/18/15 12:50 PM:
-

bq.have any clue that absolutely nothing will be protected – unless that was 
the default? 

A person configuring security will follow our documentation. Our documentation 
will have {{blockUnknown=true}} in the sample. So his setup is protected 
automatically.

bq.Related: Should we protect the user against locking herself out, 

Nice to have. Anyway he has the option of overwriting the {{security.json}} if 
he screws up badly



was (Author: noble.paul):
bq.have any clue that absolutely nothing will be protected – unless that was 
the default? 

A person configuring security will follow our documentation. Our documentation 
will have {{blockUnknown=true}} in the sample. So his setup is be protected 
automatically.

bq.Related: Should we protect the user against locking herself out, 

Nice to have. Anyway he has the option of overwriting the {{security.json}} if 
he screws up badly


> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> The users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later
> using the command
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{
> "set-property": {"blockUnknown": true}
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063927#comment-15063927
 ] 

Noble Paul commented on SOLR-8429:
--

Commit 1720729 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720729 ]

SOLR-8429: Add a flag 'blockUnknown' to BasicAuthPlugin to block 
unauthenticated requests

> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> The users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later
> using the command
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{
> "set-property": {"blockUnknown": true}
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-18 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8434:
-
Comment: was deleted

(was: Commit 1720729 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720729 ]

SOLR-8434: Add a flag 'blockUnknown' to BasicAuthPlugin to block 
unauthenticated requests)

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
>
> I should be able to specify the role as {{*}}, which would mean there must 
> be some user principal to access this resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2015-12-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-5209:
--
Attachment: SOLR-5209.patch

Attaching updated patch against trunk.

> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5209.patch, SOLR-5209.patch
>
>
> The problem we saw was that unloading of an only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone then there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063930#comment-15063930
 ] 

ASF subversion and git services commented on SOLR-8429:
---

Commit 1720777 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720777 ]

SOLR-8429: Add a flag 'blockUnknown' to BasicAuthPlugin to block 
unauthenticated requests

> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> through.
> The users can create the first security.json with that flag:
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later
> using the command
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{
> "set-property": {"blockUnknown": true}
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063934#comment-15063934
 ] 

Mark Miller commented on SOLR-8416:
---

Thanks Michael,

* Looks like a bunch of imports were moved above the license header?
* We probably want to use real solr.xml config for this, or make it params for 
the collection create call with reasonable defaults. We generally only use 
system properties for internal fail-safe options we don't expect to really be 
used. I'd be fine with reasonable defaults that could be overridden per 
collection create call, but we could also allow the defaults to be 
configurable via solr.xml.
{code}
+Integer numRetries = 
Integer.getInteger("createCollectionWaitTimeTillActive", 10);
+Boolean checkLeaderOnly = 
Boolean.getBoolean("createCollectionCheckLeaderActive");
{code}
* We should handle the checked exceptions this might throw like we do in other 
spots rather than use a catch-all Exception. There should be plenty of code to 
reference where we handle keeper and interrupted exception and do the right 
thing for each.
{code}
+  try {
+zkStateReader.updateClusterState();
+clusterState = zkStateReader.getClusterState();
+  }  catch (Exception e) {
+throw new SolrException(ErrorCode.SERVER_ERROR, "Can't connect to zk 
server", e);
+  }
{code}
* I'd probably combine the following into one IF statement:
{code}
+  if (!clusterState.liveNodesContain(replica.getNodeName())) {
+replicaNotAlive = replica.getCoreUrl();
+nodeNotLive = replica.getNodeName();
+break;
+  }
+  if (!state.equals(Replica.State.ACTIVE.toString())) {
+replicaNotAlive = replica.getCoreUrl();
+replicaState = state;
+break;
+  }
{code}
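The two checks above could be folded into one condition. A simplified standalone sketch of the combined IF (the names below are stand-ins for the patch's ClusterState/Replica lookups, not the actual Solr classes):

```java
import java.util.Set;

// Sketch of the reviewer's suggestion: one combined condition covering
// both "hosting node is not live" and "replica is not ACTIVE yet".
public class ReplicaAliveCheck {
    public static boolean notAlive(Set<String> liveNodes, String nodeName, String state) {
        // Either the node hosting the replica is gone, or the replica
        // has not reached the ACTIVE state.
        return !liveNodes.contains(nodeName) || !"active".equals(state);
    }
}
```

The caller can then record the replica's core URL and break out of the loop in one place instead of two.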
* Should probably restore interrupt status and throw a SolrException.
{code}
+  try {
+Thread.sleep(1000);
+  } catch (InterruptedException e) {
+Thread.currentThread().interrupt();
+  }
{code}
* I'm not sure the return message is quite right. If a node's state is not 
ACTIVE, it does not mean it's not live: it can be DOWN and live, or RECOVERING 
and live, etc. A replica is either live or not, and it has a live state if and 
only if it is live.
* Needs some tests.

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster, the cores may not be alive for some period of time after 
> they are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns only after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2015-12-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063936#comment-15063936
 ] 

Noble Paul commented on SOLR-8146:
--

I see that a regex is used for expressing the affinity.

I would rather have something like the replica placement rule and piggyback on 
the same syntax.

Examples:
{code}
preferredNodes=host:
{code}
You can implement new snitches, such as DCAwareSnitch or RackAwareSnitch, add 
them to the patch, and use rules like:

{code}
preferredNodes=dc:DC2
preferredNodes=rack:RACK3
{code}
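Whatever the final syntax, the underlying mechanics are a stable partition of the shuffled URL list. A minimal sketch of the regex variant described in the patch (illustrative only, not the actual SolrJ code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative sketch: move URLs matching a preferred-node pattern to the
// front of the (already shuffled) URL list, so that matching replicas are
// tried first while non-matching ones remain as fallbacks.
public class PreferredNodes {
    public static List<String> prefer(List<String> urls, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> preferred = new ArrayList<>();
        List<String> rest = new ArrayList<>();
        for (String url : urls) {
            // Stable partition: relative order within each group is kept.
            (p.matcher(url).find() ? preferred : rest).add(url);
        }
        preferred.addAll(rest);
        return preferred;
    }
}
```

A snitch- or rule-based variant would only change how the predicate is computed; the reordering itself stays the same.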

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query and then picking the 
> first item from the list.
> This ticket is to allow more flexibility and to control to some extent which 
> URLs are picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:  
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export, and other production-issue investigation.
> This ticket would allow configuring the applications that use SolrJ to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment, so replace 
> rack1/rack2 with DC1/DC2.
> Any comment would be very appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_66) - Build # 14942 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14942/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([A37C21706DAA3BF:7C09DD6447ED0E90]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:407)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:357)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:421)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 708 - Still Failing

2015-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/708/

1 tests failed.
FAILED:  
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 
__randomizedtesting.SeedInfo.seed([A3C1F353237F8CCB:4D124D83A693B62F]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8364) SpellCheckComponentTest occasionally fails

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064083#comment-15064083
 ] 

ASF subversion and git services commented on SOLR-8364:
---

Commit 1720810 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720810 ]

SOLR-8364: fix test bug

> SpellCheckComponentTest occasionally fails
> --
>
> Key: SOLR-8364
> URL: https://issues.apache.org/jira/browse/SOLR-8364
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: Trunk
>Reporter: James Dyer
>Priority: Minor
> Attachments: SOLR-8364.patch
>
>
> This failure did not reproduce for me in Linux or Windows with the same seed.
> {quote}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/
> : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> : 
> : 1 tests failed.
> : FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test
> : 
> : Error Message:
> : List size mismatch @ spellcheck/suggestions
> : 
> : Stack Trace:
> : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6936) TestDimensionalRangeQuery failures: AIOOBE while merging

2015-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6936.

   Resolution: Fixed
Fix Version/s: Trunk

This was a fun corner case: we merged N segments such that every document that 
had dimensional values was deleted, and then later tried to merge 
that merged segment ...
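The shape of the corner case can be sketched in plain Java, using a toy segment model rather than Lucene's actual BKD/merge classes (the `Segment` type, `merge` method, and all names below are illustrative assumptions, not Lucene APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the corner case described above, not Lucene's BKD code:
// a "segment" may still carry its dimensional-values structure even though
// every document that contributed values has since been deleted. A merger
// that assumes at least one live value per segment walks off the end of
// its arrays; iterating only over live docs tolerates the empty case.
class PointMergeSketch {
    static class Segment {
        final long[] values;     // dimensional values written at flush time
        final boolean[] deleted; // per-doc tombstones applied later

        Segment(long[] values, boolean[] deleted) {
            this.values = values;
            this.deleted = deleted;
        }
    }

    /** Merge live values from all segments, tolerating fully-deleted ones. */
    static long[] merge(List<Segment> segments) {
        List<Long> out = new ArrayList<>();
        for (Segment seg : segments) {
            for (int doc = 0; doc < seg.values.length; doc++) {
                if (!seg.deleted[doc]) {   // skip deleted docs -- possibly all of them
                    out.add(seg.values[doc]);
                }
            }
        }
        long[] result = new long[out.size()];
        for (int i = 0; i < result.length; i++) result[i] = out.get(i);
        return result;
    }

    public static void main(String[] args) {
        // First segment has the LUCENE-6936 shape: all value-bearing docs deleted.
        Segment empty = new Segment(new long[]{1, 2}, new boolean[]{true, true});
        Segment live = new Segment(new long[]{3}, new boolean[]{false});
        long[] merged = merge(List.of(empty, live));
        System.out.println(merged.length); // 1
    }
}
```

The fully-deleted segment contributes nothing and the merge completes, instead of indexing past the end of an exhausted reader.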

> TestDimensionalRangeQuery failures: AIOOBE while merging 
> -
>
> Key: LUCENE-6936
> URL: https://issues.apache.org/jira/browse/LUCENE-6936
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Fix For: Trunk
>
>
> From [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-trunk/105/] - neither 
> failure reproduced for me on the same box:
> {noformat}
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDimensionalRangeQuery -Dtests.method=testRandomLongsBig 
> -Dtests.seed=BEF1D45ADA12B09B -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs_CZ -Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   43.4s J5  | TestDimensionalRangeQuery.testRandomLongsBig 
> <<<
>[junit4]> Throwable #1: 
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B:95C7B6D701973443]:0)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:714)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:728)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1459)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1242)
>[junit4]>at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:170)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.verifyLongs(TestDimensionalRangeQuery.java:208)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.doTestRandomLongs(TestDimensionalRangeQuery.java:147)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsBig(TestDimensionalRangeQuery.java:114)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter$MergeReader.next(BKDWriter.java:279)
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter.merge(BKDWriter.java:413)
>[junit4]>at 
> org.apache.lucene.codecs.lucene60.Lucene60DimensionalWriter.merge(Lucene60DimensionalWriter.java:159)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.mergeDimensionalValues(SegmentMerger.java:168)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:117)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4062)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3642)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-trunk/workspace/lucene/build/core/test/J5/temp/lucene.search.TestDimensionalRangeQuery_BEF1D45ADA12B09B-001
>[junit4]   2> Dec 15, 2015 11:03:38 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: Thread[Lucene Merge 
> Thread #634,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B]:0)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
>

[jira] [Commented] (SOLR-8407) We log an error during normal recovery PeerSync now.

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064098#comment-15064098
 ] 

ASF subversion and git services commented on SOLR-8407:
---

Commit 1720813 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720813 ]

SOLR-8407: We log an error during normal recovery PeerSync now.

> We log an error during normal recovery PeerSync now.
> 
>
> Key: SOLR-8407
> URL: https://issues.apache.org/jira/browse/SOLR-8407
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> {noformat}
>   // TODO: does it ever make sense to allow sync when buffering or 
> applying buffered? Someone might request that we
>   // do it...
>   if (!(ulog.getState() == UpdateLog.State.ACTIVE || ulog.getState() == 
> UpdateLog.State.REPLAYING)) {
> log.error(msg() + "ERROR, update log not in ACTIVE or REPLAY state. " 
> + ulog);
> // return false;
>   }
> {noformat}
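One way to keep the state check without error-level noise during normal recovery is to demote the log call; a minimal standalone sketch of that idea follows (the `UpdateLogState` enum, `allowSync` method, and class name are illustrative, not Solr's actual API, and this is not necessarily the committed fix):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch of the quoted state check, with the log level
// demoted from error to warning so that hitting it during a normal
// recovery PeerSync does not read as a failure.
class PeerSyncCheck {
    enum UpdateLogState { ACTIVE, REPLAYING, BUFFERING, APPLYING_BUFFERED }

    private static final Logger log = Logger.getLogger(PeerSyncCheck.class.getName());

    /** Returns true when the update log state permits a PeerSync attempt. */
    static boolean allowSync(UpdateLogState state) {
        if (!(state == UpdateLogState.ACTIVE || state == UpdateLogState.REPLAYING)) {
            // WARNING rather than SEVERE: this path is expected during recovery.
            log.log(Level.WARNING, "update log not in ACTIVE or REPLAY state: " + state);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allowSync(UpdateLogState.ACTIVE));    // true
        System.out.println(allowSync(UpdateLogState.BUFFERING)); // false
    }
}
```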






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15241 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15241/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestDimensionalRangeQuery

Error Message:
The test or suite printed 11714 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 11714 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:747)
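The limit named in the error message can be raised per-suite or suppressed via annotations; a hedged fragment, assuming the Lucene/randomizedtesting test-framework annotations of this era (`@TestRuleLimitSysouts.Limit` and `@LuceneTestCase.SuppressSysoutChecks`) and illustrative class names:

```java
// Illustrative only -- depends on the Lucene test framework, not compilable standalone.

// Raise the per-suite stdout/stderr cap above the 8192-byte default:
@TestRuleLimitSysouts.Limit(bytes = 16 * 1024)
public class MyNoisyTest extends LuceneTestCase { /* ... */ }

// Or opt out of the check entirely (bugUrl documents why):
@LuceneTestCase.SuppressSysoutChecks(bugUrl = "https://issues.apache.org/jira/browse/...")
public class MyVeryNoisyTest extends LuceneTestCase { /* ... */ }
```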


FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testAllEqual

Error Message:
Captured an uncaught exception in thread: Thread[id=819, name=T2, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=819, name=T2, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T2: iter=14 id=7 docID=6 
value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)


FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=829, name=T2, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=829, name=T2, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T2: iter=0 id=882 docID=0 
value=4976449575468422113 (range: 4976449575468396109 TO 4976449575468445494) 
expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=ss_value:[[[B@48d97645] TO [[B@4ce6eb86]]
at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)




Build Log:
[...truncated 1393 lines...]
   [junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
   [junit4]   2> Dee 18, 2015 6:10:19 ALUULA 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T2,5,TGRP-TestDimensionalRangeQuery]
   [junit4]   2> java.lang.AssertionError: T2: iter=14 id=7 docID=6 
value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
   [junit4]   2>at org.junit.Assert.fail(Assert.java:93)
   [junit4]   2>at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
   [junit4]   2>at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)
   [junit4]   2> 
   [junit4]   2> Dee 18, 2015 6:10:19 ALUULA 

Re: [JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 707 - Still Failing

2015-12-18 Thread Michael McCandless
I'll dig, looks fun!

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 18, 2015 at 12:03 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/707/
>
> 1 tests failed.
> FAILED:  
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsTiny
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=308, name=T1, 
> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=308, name=T1, state=RUNNABLE, 
> group=TGRP-TestDimensionalRangeQuery]
> at 
> __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A:D6EA63A2F3FD523E]:0)
> Caused by: java.lang.RuntimeException: 
> java.lang.ArrayIndexOutOfBoundsException: 1
> at __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A]:0)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:266)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.lucene.util.bkd.BKDReader.addAll(BKDReader.java:280)
> at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:352)
> at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:271)
> at 
> org.apache.lucene.codecs.lucene60.Lucene60DimensionalReader.intersect(Lucene60DimensionalReader.java:100)
> at 
> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:244)
> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
> at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:593)
> at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:641)
> at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
> at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
> at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:326)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:263)
>
>
>
>
> Build Log:
> [...truncated 635 lines...]
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4] IGNOR/A 0.00s J2 | TestDimensionalRangeQuery.testRandomLongsBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> груд. 18, 2015 4:57:19 AM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T1,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> java.lang.RuntimeException: 
> java.lang.ArrayIndexOutOfBoundsException: 1
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A]:0)
>[junit4]   2>at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:266)
>[junit4]   2> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>[junit4]   2>at 
> org.apache.lucene.util.bkd.BKDReader.addAll(BKDReader.java:280)
>[junit4]   2>at 
> org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:352)
>[junit4]   2>at 
> org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:271)
>[junit4]   2>at 
> org.apache.lucene.codecs.lucene60.Lucene60DimensionalReader.intersect(Lucene60DimensionalReader.java:100)
>[junit4]   2>at 
> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:244)
>[junit4]   2>at 
> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>[junit4]   2>at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:593)
>[junit4]   2>at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:641)
>[junit4]   2>at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>[junit4]   2>at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>[junit4]   2>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
>[junit4]   2>at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
>[junit4]   2>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
>[junit4]   2>at 
> 

Re: [JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 707 - Still Failing

2015-12-18 Thread Michael McCandless
OK this was just https://issues.apache.org/jira/browse/LUCENE-6936
again, which I just committed the fix for ...

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 18, 2015 at 9:32 AM, Michael McCandless
 wrote:
> I'll dig, looks fun!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Dec 18, 2015 at 12:03 AM, Apache Jenkins Server
>  wrote:
>> Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/707/
>>
>> 1 tests failed.
>> FAILED:  
>> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsTiny
>>
>> Error Message:
>> Captured an uncaught exception in thread: Thread[id=308, name=T1, 
>> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>>
>> Stack Trace:
>> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
>> uncaught exception in thread: Thread[id=308, name=T1, state=RUNNABLE, 
>> group=TGRP-TestDimensionalRangeQuery]
>> at 
>> __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A:D6EA63A2F3FD523E]:0)
>> Caused by: java.lang.RuntimeException: 
>> java.lang.ArrayIndexOutOfBoundsException: 1
>> at __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A]:0)
>> at 
>> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:266)
>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>> at org.apache.lucene.util.bkd.BKDReader.addAll(BKDReader.java:280)
>> at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:352)
>> at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:271)
>> at 
>> org.apache.lucene.codecs.lucene60.Lucene60DimensionalReader.intersect(Lucene60DimensionalReader.java:100)
>> at 
>> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:244)
>> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>> at 
>> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:593)
>> at 
>> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:641)
>> at 
>> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>> at 
>> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
>> at 
>> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
>> at 
>> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:326)
>> at 
>> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:263)
>>
>>
>>
>>
>> Build Log:
>> [...truncated 635 lines...]
>>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>>[junit4] IGNOR/A 0.00s J2 | TestDimensionalRangeQuery.testRandomLongsBig
>>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>>[junit4]   2> груд. 18, 2015 4:57:19 AM 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>>  uncaughtException
>>[junit4]   2> WARNING: Uncaught exception in thread: 
>> Thread[T1,5,TGRP-TestDimensionalRangeQuery]
>>[junit4]   2> java.lang.RuntimeException: 
>> java.lang.ArrayIndexOutOfBoundsException: 1
>>[junit4]   2>at 
>> __randomizedtesting.SeedInfo.seed([4A3C1C4DDC38015A]:0)
>>[junit4]   2>at 
>> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:266)
>>[junit4]   2> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>>[junit4]   2>at 
>> org.apache.lucene.util.bkd.BKDReader.addAll(BKDReader.java:280)
>>[junit4]   2>at 
>> org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:352)
>>[junit4]   2>at 
>> org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:271)
>>[junit4]   2>at 
>> org.apache.lucene.codecs.lucene60.Lucene60DimensionalReader.intersect(Lucene60DimensionalReader.java:100)
>>[junit4]   2>at 
>> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:244)
>>[junit4]   2>at 
>> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>>[junit4]   2>at 
>> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:593)
>>[junit4]   2>at 
>> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:641)
>>[junit4]   2>at 
>> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>>[junit4]   2>at 
>> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
>>[junit4]   2>at 
>> 

Re: 5.3.2 bug fix release

2015-12-18 Thread david.w.smi...@gmail.com
I’d like to backport SOLR-8059 & SOLR-8060 (same as SOLR-8340): NPEs that
can happen in DebugComponent & HighlightComponent when
distrib.singlePass=true (which is implied under certain conditions even if
not explicitly =true).

On Thu, Dec 17, 2015 at 8:54 AM Jan Høydahl  wrote:

> If there is a 5.3.2 release, we should probably also backport this one:
>
> SOLR-8269 : Upgrade
> commons-collections to 3.2.2. This fixes a known serialization vulnerability
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 17. des. 2015 kl. 07.35 skrev Anshum Gupta :
>
> Yes, there was already a 5.3.2 version in JIRA. I will start back-porting
> stuff to the lucene_solr_5_3 branch later today.
>
>
> On Thu, Dec 17, 2015 at 11:35 AM, Noble Paul  wrote:
>
>> Agree with Shawn here.
>>
>> If a company has already done the work to upgrade their systems to
>> 5.3.1, they would rather have a bug fix for the old version.
>>
>> So Anshum, is there a 5.3.2 version created in JIRA? Can we start
>> tagging issues to that release so that we can have a definitive list
>> of bugs to be backported?
>>
>> On Thu, Dec 17, 2015 at 10:27 AM, Anshum Gupta 
>> wrote:
>> > Thanks for explaining it so well Shawn :)
>> >
>> > Yes, that's pretty much the reason why it makes sense to have a 5.3.2
>> > release.
>> >
>> > We might even need a 5.4.1 after that as there are more security bug
>> fixes
>> > that are getting committed as the feature is being tried by users and
>> bugs
>> > are being reported.
>> >
>> > On Thu, Dec 17, 2015 at 3:28 AM, Shawn Heisey 
>> wrote:
>> >>
>> >> On 12/16/2015 2:15 PM, Upayavira wrote:
>> >> > Why don't people just upgrade to 5.4? Why do we need another release
>> in
>> >> > the 5.3.x range?
>> >>
>> >> I am using a third-party custom Solr plugin.  The latest version of
>> that
>> >> plugin (which I have on my dev server) has only been certified to work
>> >> with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
>> >> cannot use that version yet.  If I happen to need any of the fixes that
>> >> are being backported, an official 5.3.2 release would allow me to use
>> >> official binaries, which will make my managers much more comfortable
>> >> than a version that I compile myself.
>> >>
>> >> Additionally, the IT change policies in place for many businesses
>> >> require a huge amount of QA work for software upgrades, but those
>> >> policies may be relaxed for hotfixes and upgrades that are *only*
>> >> bugfixes.  For users operating under those policies, a bugfix release
>> >> will allow them to fix bugs immediately, rather than spend several
>> weeks
>> >> validating a new minor release.
>> >>
>> >> There is a huge amount of interest in the new security features in
>> >> 5.3.x, functionality that has a number of critical problems.  Lots of
>> >> users who need those features have already deployed 5.3.1.  Many of the
>> >> critical problems are fixed in 5.4, and these are the fixes that Anshum
>> >> wants to make available in 5.3.2.  If a user is in either of the
>> >> situations that I outlined above, upgrading to 5.4 may be unrealistic.
>> >>
>> >> Thanks,
>> >> Shawn
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Anshum Gupta
>>
>>
>>
>> --
>> -
>> Noble Paul
>>
>>
>>
>
>
> --
> Anshum Gupta
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-8407) We log an error during normal recovery PeerSync now.

2015-12-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8407:
--
Priority: Minor  (was: Major)

> We log an error during normal recovery PeerSync now.
> 
>
> Key: SOLR-8407
> URL: https://issues.apache.org/jira/browse/SOLR-8407
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> {noformat}
>   // TODO: does it ever make sense to allow sync when buffering or 
> applying buffered? Someone might request that we
>   // do it...
>   if (!(ulog.getState() == UpdateLog.State.ACTIVE || ulog.getState() == 
> UpdateLog.State.REPLAYING)) {
> log.error(msg() + "ERROR, update log not in ACTIVE or REPLAY state. " 
> + ulog);
> // return false;
>   }
> {noformat}






[jira] [Resolved] (SOLR-8407) We log an error during normal recovery PeerSync now.

2015-12-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8407.
---
Resolution: Fixed

> We log an error during normal recovery PeerSync now.
> 
>
> Key: SOLR-8407
> URL: https://issues.apache.org/jira/browse/SOLR-8407
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> {noformat}
>   // TODO: does it ever make sense to allow sync when buffering or 
> applying buffered? Someone might request that we
>   // do it...
>   if (!(ulog.getState() == UpdateLog.State.ACTIVE || ulog.getState() == 
> UpdateLog.State.REPLAYING)) {
> log.error(msg() + "ERROR, update log not in ACTIVE or REPLAY state. " 
> + ulog);
> // return false;
>   }
> {noformat}






Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15240 - Still Failing!

2015-12-18 Thread Michael McCandless
Hmm, I'll dig: I cutover this test to use dimensional values, maybe it
found something!

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 18, 2015 at 9:52 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15240/
> Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
> -XX:-CompactStrings
>
> 3 tests failed.
> FAILED:  org.apache.lucene.facet.range.TestRangeFacetCounts.testRandomDoubles
>
> Error Message:
> expected:<2401> but was:<391>
>

[jira] [Commented] (SOLR-8435) Long update times Solr 5.3.1

2015-12-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064152#comment-15064152
 ] 

Erick Erickson commented on SOLR-8435:
--

Check that you aren't somehow building suggesters on commit.
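One concrete thing to check: a suggester with {{buildOnCommit}} enabled rebuilds its dictionary on every commit, which can turn a ~20s bulk update into minutes. A hedged sketch of what such a configuration looks like in {{solrconfig.xml}} (component name, lookup implementation, and field are illustrative, not taken from the reporter's config):

```xml
<!-- Illustrative sketch only. If buildOnCommit is "true", every commit
     triggers a full dictionary rebuild; setting it to "false" and issuing
     an explicit suggest.build=true request instead avoids paying that
     cost on each bulk update. -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="field">title</str>
    <str name="buildOnCommit">false</str>
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
```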

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have two 128 GB Ubuntu servers in a SolrCloud configuration. We update by 
> curling JSON files of 20,000 documents each. In 5.2.1 this consistently takes 
> between 19 and 24 seconds. In 5.3.1 it usually takes around 20s, but for about 
> 20% of the files it takes much longer: up to 500s! Which files are affected 
> seems to be quite random. Is this a known bug? Any workaround? Is it fixed in 5.4?






[jira] [Commented] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063965#comment-15063965
 ] 

Mark Miller commented on SOLR-8437:
---

Sounds okay to me.

> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as 
> well while it was using RAMDirectoryFactory, and it worked just fine.
> So we should just get rid of that comment from all the example configs 
> shipped with Solr.






[jira] [Commented] (LUCENE-6936) TestDimensionalRangeQuery failures: AIOOBE while merging

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063989#comment-15063989
 ] 

ASF subversion and git services commented on LUCENE-6936:
-

Commit 1720798 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1720798 ]

LUCENE-6936: don't try to merge dimensional values that have 0 points/documents 
(when all docs having dimensional values were deleted)
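A toy model of the guard the commit message describes: segments whose dimensional values were all deleted contribute zero points and must be skipped, otherwise the merger walks past the end of an empty block (the ArrayIndexOutOfBoundsException in the report). Class and method names here are illustrative, not the actual BKDWriter API.

```java
import java.util.*;

public class MergeGuardSketch {
  // Merge per-segment sorted-value lists, skipping segments with 0 points.
  static List<Integer> mergeNonEmpty(List<List<Integer>> segments) {
    List<Integer> merged = new ArrayList<>();
    for (List<Integer> seg : segments) {
      if (seg == null || seg.isEmpty()) {
        continue; // all docs with dimensional values were deleted: nothing to merge
      }
      merged.addAll(seg);
    }
    Collections.sort(merged);
    return merged;
  }

  public static void main(String[] args) {
    // One segment is empty; without the guard a naive merger would still
    // try to read a block from it.
    System.out.println(mergeNonEmpty(Arrays.asList(
        Arrays.asList(3, 1), Collections.emptyList(), Arrays.asList(2))));
    // prints [1, 2, 3]
  }
}
```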

> TestDimensionalRangeQuery failures: AIOOBE while merging 
> -
>
> Key: LUCENE-6936
> URL: https://issues.apache.org/jira/browse/LUCENE-6936
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>
> From [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-trunk/105/] - neither 
> failure reproduced for me on the same box:
> {noformat}
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDimensionalRangeQuery -Dtests.method=testRandomLongsBig 
> -Dtests.seed=BEF1D45ADA12B09B -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs_CZ -Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   43.4s J5  | TestDimensionalRangeQuery.testRandomLongsBig 
> <<<
>[junit4]> Throwable #1: 
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B:95C7B6D701973443]:0)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:714)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:728)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1459)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1242)
>[junit4]>at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:170)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.verifyLongs(TestDimensionalRangeQuery.java:208)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.doTestRandomLongs(TestDimensionalRangeQuery.java:147)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsBig(TestDimensionalRangeQuery.java:114)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter$MergeReader.next(BKDWriter.java:279)
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter.merge(BKDWriter.java:413)
>[junit4]>at 
> org.apache.lucene.codecs.lucene60.Lucene60DimensionalWriter.merge(Lucene60DimensionalWriter.java:159)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.mergeDimensionalValues(SegmentMerger.java:168)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:117)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4062)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3642)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-trunk/workspace/lucene/build/core/test/J5/temp/lucene.search.TestDimensionalRangeQuery_BEF1D45ADA12B09B-001
>[junit4]   2> Dec 15, 2015 11:03:38 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: Thread[Lucene Merge 
> Thread #634,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B]:0)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>[junit4]   2>at 
> 

[jira] [Commented] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064009#comment-15064009
 ] 

Jason Gerlowski commented on SOLR-7525:
---

bq. the tuples in B aren't used for anything and are dropped as soon as 
possible. The reason they make use of the ReducerStream is that B having 1 
instance of some tuple found in A is the same as B having 100 instances of 
that tuple. Whether it's 1 or 100, the tuple exists in B, so its twin in A 
can either be returned from A or not. For this reason the size of the 
ReducerStream can always just be 1

Ah, this makes sense now.  I was misreading {{ReducerStream}}; that invalidates 
most of the rest of my comment.  But you learn something new every day, I 
guess... Looking forward to seeing your update to the patch, so I can get a 
better idea of how this should work.  Thanks for the clarification, Dennis.

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch
>
>
> This ticket adds a ComplementStream to the Streaming API and Streaming 
> Expression language.
> The ComplementStream will wrap two TupleStreams (StreamA, StreamB) and emit 
> Tuples from StreamA that are not in StreamB.
> Streaming API Syntax:
> {code}
> ComplementStream cstream = new ComplementStream(streamA, streamB, comp);
> {code}
> Streaming Expression syntax:
> {code}
> complement(search(...), search(...), on(...))
> {code}
> Internal implementation will rely on the ReducerStream. The ComplementStream 
> can be parallelized using the ParallelStream.
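The complement semantics described above can be sketched with plain Java collections: emit every tuple from stream A whose key does not appear in stream B, with duplicates in B collapsing to a single occurrence (the "ReducerStream size can always just be 1" observation). Class and method names are illustrative, not the actual Solr ComplementStream API.

```java
import java.util.*;

public class ComplementSketch {
  // Emit elements of A that have no match in B. B is deduplicated first,
  // since one occurrence in B suppresses its twin in A just as well as 100.
  static List<String> complement(List<String> streamA, List<String> streamB) {
    Set<String> inB = new TreeSet<>(streamB); // collapses duplicates in B
    List<String> out = new ArrayList<>();
    for (String a : streamA) {
      if (!inB.contains(a)) {
        out.add(a);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> a = Arrays.asList("ant", "bee", "cat", "dog");
    List<String> b = Arrays.asList("bee", "bee", "dog");
    System.out.println(complement(a, b)); // prints [ant, cat]
  }
}
```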






[jira] [Commented] (SOLR-8364) SpellCheckComponentTest occasionally fails

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064086#comment-15064086
 ] 

ASF subversion and git services commented on SOLR-8364:
---

Commit 1720812 from jd...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720812 ]

SOLR-8364: fix test bug

> SpellCheckComponentTest occasionally fails
> --
>
> Key: SOLR-8364
> URL: https://issues.apache.org/jira/browse/SOLR-8364
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: Trunk
>Reporter: James Dyer
>Priority: Minor
> Attachments: SOLR-8364.patch
>
>
> This failure did not reproduce for me in Linux or Windows with the same seed.
> {quote}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/
> : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> : 
> : 1 tests failed.
> : FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test
> : 
> : Error Message:
> : List size mismatch @ spellcheck/suggestions
> : 
> : Stack Trace:
> : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
> {quote}






[jira] [Commented] (SOLR-8407) We log an error during normal recovery PeerSync now.

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064106#comment-15064106
 ] 

ASF subversion and git services commented on SOLR-8407:
---

Commit 1720814 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720814 ]

SOLR-8407: We log an error during normal recovery PeerSync now.

> We log an error during normal recovery PeerSync now.
> 
>
> Key: SOLR-8407
> URL: https://issues.apache.org/jira/browse/SOLR-8407
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: 5.5, Trunk
>
>
> {noformat}
>   // TODO: does it ever make sense to allow sync when buffering or 
> applying buffered? Someone might request that we
>   // do it...
>   if (!(ulog.getState() == UpdateLog.State.ACTIVE || ulog.getState() == 
> UpdateLog.State.REPLAYING)) {
> log.error(msg() + "ERROR, update log not in ACTIVE or REPLAY state. " 
> + ulog);
> // return false;
>   }
> {noformat}






[jira] [Updated] (SOLR-8407) We log an error during normal recovery PeerSync now.

2015-12-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8407:
--
 Assignee: Mark Miller
Fix Version/s: Trunk
   5.5

> We log an error during normal recovery PeerSync now.
> 
>
> Key: SOLR-8407
> URL: https://issues.apache.org/jira/browse/SOLR-8407
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5, Trunk
>
>
> {noformat}
>   // TODO: does it ever make sense to allow sync when buffering or 
> applying buffered? Someone might request that we
>   // do it...
>   if (!(ulog.getState() == UpdateLog.State.ACTIVE || ulog.getState() == 
> UpdateLog.State.REPLAYING)) {
> log.error(msg() + "ERROR, update log not in ACTIVE or REPLAY state. " 
> + ulog);
> // return false;
>   }
> {noformat}






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15240 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15240/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  org.apache.lucene.facet.range.TestRangeFacetCounts.testRandomDoubles

Error Message:
expected:<2401> but was:<391>

Stack Trace:
java.lang.AssertionError: expected:<2401> but was:<391>
at 
__randomizedtesting.SeedInfo.seed([59543DC909A8E9C5:41F9F0CA8DD8BF03]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.facet.range.TestRangeFacetCounts.testRandomDoubles(TestRangeFacetCounts.java:801)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:747)


FAILED:  org.apache.lucene.facet.range.TestRangeFacetCounts.testRandomLongs

Error Message:
expected:<3060> but was:<549>

Stack Trace:
java.lang.AssertionError: expected:<3060> but was:<549>
at 
__randomizedtesting.SeedInfo.seed([59543DC909A8E9C5:EE806905C61B4E16]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at 

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5351 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5351/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/_und/rz", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/_und/rz",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([27F387908900F97C:FFBEAAC77EDD5CDC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8317) add responseHeader and response accessors to SolrQueryResponse

2015-12-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8317:
--
Attachment: SOLR-8317-part1of2.patch

> add responseHeader and response accessors to SolrQueryResponse
> --
>
> Key: SOLR-8317
> URL: https://issues.apache.org/jira/browse/SOLR-8317
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8317-part1of2.patch, SOLR-8317.patch
>
>
> To make code easier to understand and modify. Proposed patch against trunk to 
> follow.






[jira] [Resolved] (SOLR-8364) SpellCheckComponentTest occasionally fails

2015-12-18 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-8364.
--
   Resolution: Fixed
 Assignee: James Dyer
Fix Version/s: 5.5

I committed the patch.  I did not create CHANGES.txt entries as this is a 
test-only fix.

> SpellCheckComponentTest occasionally fails
> --
>
> Key: SOLR-8364
> URL: https://issues.apache.org/jira/browse/SOLR-8364
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: Trunk
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 5.5
>
> Attachments: SOLR-8364.patch
>
>
> This failure did not reproduce for me in Linux or Windows with the same seed.
> {quote}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/
> : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> : 
> : 1 tests failed.
> : FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test
> : 
> : Error Message:
> : List size mismatch @ spellcheck/suggestions
> : 
> : Stack Trace:
> : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
> {quote}






[jira] [Resolved] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8230.

   Resolution: Fixed
Fix Version/s: 5.5

Committed.  Thanks Michael!
I also added a simple test to just test for the presence of "facet-info" and 
also randomly added it in the main TestJsonFacets test just to ensure that it 
didn't cause exceptions or other issues for all the various facet types.

From a style perspective, I also moved the license to the top of the new file.

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.
> Here is an example of telemetry returned from query. 
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debug=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBuckets:true,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}
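The "facet-trace" structure above is a recursive tree: each node carries a processor name, an elapsed time, and an optional list of nested "sub-facet" nodes. A small sketch of how a client might flatten that tree for inspection (the node shape is assumed from the example output above; key names could differ in other versions):

```python
# Walk a "facet-trace" debug tree and collect
# (depth, processor, field, elapse) tuples from every nested level.
def walk_facet_trace(node, depth=0):
    rows = [(depth, node.get("processor"), node.get("field"), node.get("elapse"))]
    for child in node.get("sub-facet", []):
        rows.extend(walk_facet_trace(child, depth + 1))
    return rows

# Trimmed-down version of the telemetry shown above.
trace = {
    "processor": "FacetQueryProcessor",
    "elapse": 1,
    "sub-facet": [{
        "processor": "FacetFieldProcessorUIF",
        "elapse": 1,
        "field": "genre",
        "sub-facet": [
            {"processor": "FacetFieldProcessorUIF", "elapse": 0,
             "field": "directed_by"},
            {"processor": "FacetFieldProcessorNumeric", "elapse": 0,
             "field": "initial_release_date"},
        ],
    }],
}

for depth, proc, field, elapse in walk_facet_trace(trace):
    print("  " * depth + f"{proc} field={field} elapse={elapse}ms")
```

The indentation mirrors the facet nesting, which makes it easy to spot which sub-facet dominates the elapsed time.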



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15242 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15242/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testBasicSortedSet

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([59C2BC07FF545D87:3F3C105C63D72DF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.search.TestDimensionalRangeQuery.testBasicSortedSet(TestDimensionalRangeQuery.java:774)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=1321, name=T0, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1321, name=T0, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
at 

[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064292#comment-15064292
 ] 

Mark Miller commented on LUCENE-6937:
-

Lots of projects at Apache have already migrated, so other than how we clean 
up our svn-git migration, none of this will be new ground.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064290#comment-15064290
 ] 

Dennis Gove edited comment on SOLR-8443 at 12/18/15 5:35 PM:
-

If open to other suggestions, I find that I tend to refer to that parameter as 
the expression. Maybe expr=search().

My thinking here is that one is providing a (potentially complex) expression 
made up of function calls.


was (Author: dpgove):
If open to other suggestions, I find that I tend to refer to that parameter as 
the expression. Maybe expr=search()

> Change /stream handler http param from "stream" to "func"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8444) Combine facet telemetry information from shards

2015-12-18 Thread Michael Sun (JIRA)
Michael Sun created SOLR-8444:
-

 Summary: Combine facet telemetry information from shards
 Key: SOLR-8444
 URL: https://issues.apache.org/jira/browse/SOLR-8444
 Project: Solr
  Issue Type: Sub-task
Reporter: Michael Sun






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8443:
-
Description: 
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "expr". 

For example:

http://localhost:8983/collection1/stream?expr=search(...)



  was:
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "expr". 

This syntax also helps to emphasize that Streaming Expressions are a function 
language.

For example:

http://localhost:8983/collection1/stream?expr=search(...)




> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "expr". 
> For example:
> http://localhost:8983/collection1/stream?expr=search(...)
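Since a Streaming Expression contains characters that are unsafe in a URL (parentheses, quotes, `*`), a client would normally URL-encode the {{expr}} parameter rather than paste it literally as in the example above. A minimal sketch (the collection name and expression are illustrative):

```python
from urllib.parse import urlencode

# Build a /stream request using the proposed "expr" parameter name.
# The expression below is an assumed example, not from the ticket.
expr = 'search(collection1, q="*:*", fl="id", sort="id asc")'
url = "http://localhost:8983/solr/collection1/stream?" + urlencode({"expr": expr})
print(url)
```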



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064319#comment-15064319
 ] 

Uwe Schindler edited comment on LUCENE-6937 at 12/18/15 5:53 PM:
-

bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-lusolr-merge SVN repos. So basically, Dawid does not need 
to clone them anyways, we can leave what exists there. It looks like it is 
complete. The trunk branch should still be renamed to "master" (or GitHub's 
config changed), though, because the repo's homepage currently shows the wrong 
branch (in Solr's case it shows version 1.1, the alphabetically first branch; 
for Lucene it is, interestingly, correct).


was (Author: thetaphi):
bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-lusolr-merge SVN repos. So basically, Dawid does not need 
to clone them anyways, we can leave what exists there. It looks like it is 
complete. Although the trunk branch should be renamed to "master" (or github's 
config changed), because currently it shows the wrong ones if you go to repo's 
homepage.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E
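The branch rename mentioned above is a one-liner on a clone. A self-contained demo in a throwaway repo (on the real mirrors this would be done server-side, or GitHub's default-branch setting changed; the scratch repo below is purely illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -q -b trunk
git commit -q --allow-empty -m "seed commit"
# Rename the current branch, as suggested for the old pre-merge mirrors:
git branch -m trunk master
git branch --show-current   # prints: master
```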



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064353#comment-15064353
 ] 

Dawid Weiss commented on LUCENE-6933:
-

I can go down to git repo size of 160mb by removing any of these files (not 
currently used on any of the active branches):
{code}
*.mem
*.dat
*.war
*.zip
{code}
These are mostly precompiled automata, etc. Current blobs (in any of branch_x 
and master) are not affected, but tags are. Don't know if it makes sense.
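Before rewriting history to strip those patterns (e.g. with a tool like git-filter-repo), it is worth measuring how much they actually contribute. A rough sketch under the stated patterns — the directory and file names in the demo are placeholders:

```python
import fnmatch
import os
import tempfile

# Patterns quoted in the comment above as strip candidates.
PATTERNS = ("*.mem", "*.dat", "*.war", "*.zip")

def blob_bytes(root):
    """Total size of files under root matching the strip-candidate patterns."""
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if any(fnmatch.fnmatch(name, p) for p in PATTERNS):
                total += os.path.getsize(os.path.join(dirpath, name))
    return total

# Tiny demo on a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "fst.mem"), "wb") as f:
        f.write(b"x" * 1024)          # counted: matches *.mem
    with open(os.path.join(d, "Foo.java"), "w") as f:
        f.write("class Foo {}")       # not counted
    size = blob_bytes(d)
```

Running this over each tag's tree would show how much of the 160 MB saving comes from tags versus active branches.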


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6933:

Attachment: migration.txt

SVN-git merging procedure (outline). For historical reference.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-18 Thread Dawid Weiss
I've made some comments about the conversion process here:
https://issues.apache.org/jira/browse/LUCENE-6933?focusedCommentId=15064208#comment-15064208

Feel free to try it out.
https://github.com/dweiss/lucene-solr-svn2git

I don't know what the next steps are. This looks like a good starting point
to switch over to git with all the development? The only thing I still plan
on doing is getting rid of a few large binary blobs in historical
resources, but even without it this seems acceptable size-wise (~200mb).

Dawid



On Thu, Dec 17, 2015 at 9:13 AM, Dawid Weiss  wrote:

>
> > The question I had (I am sure a very dumb one): WHY do we care about history
> preserved perfectly in Git?
>
> For me it's for sentimental, archival and task-challenge reasons. Robert's
> requirement is that git praise/blame/log works and on a given file and
> shows its true history of changes. Everyone has his own reasons I guess. If
> the initial clone is small enough then I see no problem in keeping the
> history if we can preserve it.
>
> Dawid
>
>
>
> On Thu, Dec 17, 2015 at 4:52 AM, david.w.smi...@gmail.com <
> david.w.smi...@gmail.com> wrote:
>
>> +1 totally agree.  Any way; the bloat should largely be the binaries &
>> unrelated projects, not code (small text files).
>>
>> On Wed, Dec 16, 2015 at 10:36 PM Doug Turnbull <
>> dturnb...@opensourceconnections.com> wrote:
>>
>>> In defense of more history immediately available--it is often far more
>>> useful to poke around code history/run blame to figure out some code than
>>> by taking it at face value. Putting this in a secondary place like
>>> Apache SVN repo IMO reduces the readability of the code itself. This is
>>> doubly true for new developers that won't know about Apache's SVN. And
>>> Lucene can be quite intricate code. Further in my own work poking around in
>>> github mirrors I frequently hit the current cutoff. Which is one reason I
>>> stopped using them for anything but the casual investigation.
>>>
>>> I'm not totally against a cutoff point, but I'd advocate for exhausting
>>> other options first, such as trimming out unrelated projects, binaries, etc.
>>>
>>> -Doug
>>>
>>>
>>> On Wednesday, December 16, 2015, Shawn Heisey 
>>> wrote:
>>>
 On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
 > On 16 December 2015 at 00:44, Dawid Weiss 
 wrote:
 >> 4) The size of JARs is really not an issue. The entire SVN repo I
 mirrored
 >> locally (including empty interim commits to cater for
 svn:mergeinfos) is 4G.
 >> If you strip the stuff like javadocs and side projects (Nutch, Tika,
 Mahout)
 >> then I bet the entire history can fit in 1G total. Of course
 stripping JARs
 >> is also doable.
 > I think this answered one of the issues. So, this is not something to
 focus on.
 >
 > The question I had (I am sure a very dumb one): WHY do we care about
 > history preserved perfectly in Git? Because that seems to be the real
 > bottleneck now. Does anybody still checks out an intermediate commit
 > in Solr 1.4 branch?

 I do not think we need every bit of history -- at least in the primary
 read/write repository.  I wonder how much of a size difference there
 would be between tossing all history before 5.0 and tossing all history
 before the ivy transition was completed.

 In the interests of reducing the size and download time of a clone
 operation, I definitely think we should trim history in the main repo to
 some arbitrary point, as long as the full history is available
 elsewhere.  It's my understanding that it will remain in svn.apache.org
 (possibly forever), and I think we could also create "historical"
 read-only git repos.

 Almost every time I am working on the code, I only care about the stable
 branch and trunk.  Sometimes I will check out an older 4.x tag so I can
 see the exact code referenced by a stacktrace in a user's error message,
 but when this is required, I am willing to go to an entirely different
 repository and chew up bandwidth/disk resources to obtain it, and I do
 not care whether it is git or svn.  As time marches on, fewer people
 will have reasons to look at the historical record.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>> --
>>> *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections
>>> , LLC | 240.476.9983
>>> Author: Relevant Search 
>>> This e-mail and all contents, including attachments, is considered to be
>>> Company Confidential unless explicitly stated otherwise, regardless
>>> of whether attachments are marked as such.
>>>
>>> 

[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2015-12-18 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064219#comment-15064219
 ] 

Arcadius Ahouansou commented on SOLR-8146:
--

Thank you very much [~noble.paul].
I will have a look into {{snitch}}

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Backgrouds
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query and then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control to some extend which 
> URLs will be picked up for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow to configure the applications using SolrJ to query 
> only the (L) nodes
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross DC deployment. So, replace 
> rack1/rack2 by DC1/DC2
> Any comment would be very appreciated.
> Thanks.
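The reordering described in the patch is simple to illustrate: URLs matching the regex from the {{solr.preferredQueryNodePattern}} system property are moved to the front of the (already shuffled) replica list, so preferred nodes are tried first and the rest remain as fallback. A sketch of the idea in isolation — the property name comes from the ticket, everything else (host names, helper) is illustrative:

```python
import re

def prefer(urls, pattern):
    """Move URLs matching pattern to the front, preserving relative order."""
    rx = re.compile(pattern)
    preferred = [u for u in urls if rx.search(u)]
    rest = [u for u in urls if not rx.search(u)]
    return preferred + rest

# A shuffled replica list as CloudSolrClient might see it.
urls = [
    "http://rack2-node1:8983/solr/films",
    "http://rack1-node1:8983/solr/films",
    "http://rack1-node2:8983/solr/films",
]
# A client on rack1 would set -Dsolr.preferredQueryNodePattern=rack1-
ordered = prefer(urls, r"rack1-")
```

Because non-matching URLs stay at the tail rather than being dropped, cross-rack reads still happen when no preferred node is available, matching the fallback behavior described above.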



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15241 - Still Failing!

2015-12-18 Thread Michael McCandless
Woops, I'll fix ;)

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 18, 2015 at 11:11 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15241/
> Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
> -XX:-CompactStrings
>
> 3 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.search.TestDimensionalRangeQuery
>
> Error Message:
> The test or suite printed 11714 bytes to stdout and stderr, even though the 
> limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>
> Stack Trace:
> java.lang.AssertionError: The test or suite printed 11714 bytes to stdout and 
> stderr, even though the limit was set to 8192 bytes. Increase the limit with 
> @Limit, ignore it completely with @SuppressSysoutChecks or run with 
> -Dtests.verbose=true
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at 
> org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
> at 
> com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at java.lang.Thread.run(Thread.java:747)
>
>
> FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testAllEqual
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=819, name=T2, 
> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=819, name=T2, state=RUNNABLE, 
> group=TGRP-TestDimensionalRangeQuery]
> Caused by: java.lang.AssertionError: T2: iter=14 id=7 docID=6 
> value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)
>
>
> FAILED:  
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsMedium
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=829, name=T2, 
> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=829, name=T2, state=RUNNABLE, 
> group=TGRP-TestDimensionalRangeQuery]
> Caused by: java.lang.AssertionError: T2: iter=0 id=882 docID=0 
> value=4976449575468422113 (range: 4976449575468396109 TO 4976449575468445494) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=ss_value:[[[B@48d97645] TO [[B@4ce6eb86]]
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)
>
>
>
>
> Build Log:
> [...truncated 1393 lines...]
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4]   2> Dee 18, 2015 6:10:19 ALUULA 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T2,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> java.lang.AssertionError: T2: iter=14 id=7 docID=6 
> value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
>[junit4]   2>at org.junit.Assert.fail(Assert.java:93)
>[junit4]   2>  

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064270#comment-15064270
 ] 

Robert Muir commented on LUCENE-6933:
-

Is it expected that there is still a problem with the lucene core/ history?

E.g., here is IndexWriter:
https://github.com/dweiss/lucene-solr-svn2git/commits/master/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java?page=8


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064287#comment-15064287
 ] 

Dawid Weiss commented on LUCENE-6937:
-

As for infra, technically this should be easy -- set up a git repo, then clone 
--mirror the one I uploaded to GitHub... Legally -- I don't know. Infra could 
retrace everything I did to ensure consistency with SVN, but I see little 
point in doing so (it takes an awful amount of time and some quirky knowledge).

Also, I don't know whether we can or should just remove or replace the 
existing git clone at:
git://git.apache.org/lucene-solr.git

This will complicate GitHub mirror integration, as there are already existing 
forks of it, etc. My opinion is that we should replace it, because it's not a 
complete mirror anyway.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064297#comment-15064297
 ] 

Mark Miller commented on LUCENE-6937:
-

bq. The infras could retrace everything I did to ensure consistency with SVN, 
but I see little point in doing this (takes awful amount of time and some 
quirky knowledge).

I don't think they will be very interested in doing those things -- just in 
getting our Git repo set up at Apache and our link to GitHub set up. Most 
likely, they are only going to be interested in touching the things we cannot.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064326#comment-15064326
 ] 

Dawid Weiss commented on LUCENE-6937:
-

These are obsolete repos. Frankly, I'd just remove them; most likely they only 
confuse people.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Updated] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-18 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Attachment: 0001-Fix-overflow-in-date-statistics.patch

Fixes overflow in stddev, too.

Not ready to commit. I still have to fix a rounding error in TestDistributed.

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add 
> a cast to double 
> sumOfSquares += ( (double)value * value * count);
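The overflow is easy to reproduce in isolation. Below is a minimal standalone sketch, not the actual StatsValuesFactory code; the class and variable names are invented for illustration:

```java
import java.math.BigInteger;

public class DateSumOfSquaresSketch {
    public static void main(String[] args) {
        // An epoch-millisecond date value; its square (~2.1e24) far
        // exceeds Long.MAX_VALUE (~9.2e18).
        long value = 1450396800000L; // 2015-12-18T00:00:00Z
        int count = 1;

        long wrapped = value * value * count;          // silently wraps mod 2^64
        double fixed = (double) value * value * count; // the proposed cast-to-double fix

        BigInteger exact = BigInteger.valueOf(value).pow(2);
        System.out.println(BigInteger.valueOf(wrapped).equals(exact)); // false: the long overflowed
        System.out.println(fixed == exact.doubleValue());              // true: the double keeps the magnitude
    }
}
```

Moving the accumulator to double trades exactness for range, which is acceptable for a statistical aggregate like sumOfSquares.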






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064347#comment-15064347
 ] 

Uwe Schindler commented on LUCENE-6937:
---

The message is there: https://github.com/apache/solr/tree/trunk

The problem is that GitHub opens the old 1.1 release branch because there is 
no "master", and "1.1" comes first in alphabetical order.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 885 - Still Failing

2015-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/885/

1 tests failed.
FAILED:  
org.apache.lucene.search.suggest.document.TestSuggestField.testSuggestOnMostlyDeletedDocuments

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_y_completion_0.pay=1, _y_completion_0.tim=1, _y.dim=1, _y.fdt=1, 
_y_completion_0.pos=1, _y.nvd=1, _y_completion_0.doc=1, _y_completion_0.lkp=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_y_completion_0.pay=1, _y_completion_0.tim=1, _y.dim=1, _y.fdt=1, 
_y_completion_0.pos=1, _y.nvd=1, _y_completion_0.doc=1, _y_completion_0.lkp=1}
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:771)
at 
org.apache.lucene.search.suggest.document.TestSuggestField.after(TestSuggestField.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:929)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _y_completion_0.pos
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:659)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:703)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:88)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:443)
at 
org.apache.lucene.search.suggest.document.CompletionFieldsProducer.(CompletionFieldsProducer.java:92)

[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064223#comment-15064223
 ] 

Michael McCandless commented on SOLR-7865:
--

Thanks [~arcadius], your patch looks great!  I'll run tests and commit 
shortly...

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java.
> This is mainly because {code}num * numFactor{code} gets called multiple 
> times from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1, but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}
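A hedged sketch of the failure mode described above (the class and method names below are invented stand-ins, not the actual BlendedInfixLookupFactory code): the lookup over-fetches num * numFactor candidates so blending can re-rank them, but must trim back to num before returning.

```java
import java.util.ArrayList;
import java.util.List;

public class SuggestCountSketch {
    // Over-fetch num * numFactor candidates for re-ranking, then trim to num.
    static List<String> lookup(List<String> candidates, int num, int numFactor) {
        int fetch = Math.min(num * numFactor, candidates.size()); // over-fetch for blending
        List<String> blended = new ArrayList<>(candidates.subList(0, fetch));
        // The bug shape: returning `blended` directly can yield up to
        // num * numFactor results. Trimming restores the contract:
        return blended.subList(0, Math.min(num, blended.size()));
    }

    public static void main(String[] args) {
        List<String> docs = List.of("the office", "the apartment", "the firm");
        System.out.println(lookup(docs, 1, 10).size()); // 1, not 3
    }
}
```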






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064301#comment-15064301
 ] 

Robert Muir commented on LUCENE-6933:
-

Thanks Dawid, I installed the Chrome extension 
(https://chrome.google.com/webstore/detail/github-follow/agalokjhnhheienloigiaoohgmjdpned/), 
which seems to work.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064362#comment-15064362
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1720831 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720831 ]

SOLR-7865: BlendedInfixSuggester was returning more results than requested

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java.
> This is mainly because {code}num * numFactor{code} gets called multiple 
> times from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1, but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}






[jira] [Commented] (SOLR-8317) add responseHeader and response accessors to SolrQueryResponse

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064174#comment-15064174
 ] 

ASF subversion and git services commented on SOLR-8317:
---

Commit 1720822 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1720822 ]

SOLR-8317: add responseHeader and response accessors to SolrQueryResponse. 
TestSolrQueryResponse tests for accessors.

> add responseHeader and response accessors to SolrQueryResponse
> --
>
> Key: SOLR-8317
> URL: https://issues.apache.org/jira/browse/SOLR-8317
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8317-part1of2.patch, SOLR-8317.patch
>
>
> To make code easier to understand and modify. Proposed patch against trunk to 
> follow.






[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064179#comment-15064179
 ] 

ASF subversion and git services commented on SOLR-8230:
---

Commit 1720824 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720824 ]

SOLR-8230: JSON Facet API: add facet-info to debug when debugQuery=true

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228, Facet Telemetry. It will implement 
> telemetry for a nested facet query and put the information obtained into 
> the debug field of the response.
> Here is an example of the telemetry returned from a query.
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debugQuery=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBuckets:true,
>   limit:2,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 269 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/269/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=350, name=T1, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=350, name=T1, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T1: iter=0 id=3839 docID=4 
value=-6512425785367661192 (range: -7507609947746435620 TO 2010628042616866333) 
expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=sn_value:[[[B@23428fab] TO [[B@277d0f1b]]
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)


FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testAllEqual

Error Message:
Captured an uncaught exception in thread: Thread[id=358, name=T3, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=358, name=T3, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T3: iter=1 id=9238 docID=0 
value=-8430619521360666154 (range: -8704842830894929849 TO 
-2114881473162340490) expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=ss_value:[[[B@1db6c503] TO [[B@72cda9ca]]
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)


FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=361, name=T0, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=361, name=T0, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
at 
__randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7:E04076331AA79D70]:0)
Caused by: java.lang.AssertionError: 472 hits were wrong
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$2._run(TestDimensionalRangeQuery.java:637)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$2.run(TestDimensionalRangeQuery.java:533)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestDimensionalRangeQuery

Error Message:
The test or suite printed 206572 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 206572 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1685 lines...]
   [junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
   [junit4]   2> des. 18, 2015 11:56:09 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: 

[jira] [Assigned] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned SOLR-7865:


Assignee: Michael McCandless

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java.
> This is mainly because {code}num * numFactor{code} gets called multiple 
> times from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1, but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}






[jira] [Commented] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064290#comment-15064290
 ] 

Dennis Gove commented on SOLR-8443:
---

If we're open to other suggestions: I tend to refer to that parameter as the 
expression. Maybe expr=search()?

> Change /stream handler http param from "stream" to "func"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)
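Whatever the parameter ends up being called ("stream", "func", or "expr"), the expression itself contains characters like parentheses, commas, and quotes that must be URL-encoded before being put on the query string. A small sketch (the host, path, and parameter name here follow the examples in this thread, not a settled API):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StreamParamSketch {
    public static void main(String[] args) throws Exception {
        // A typical Streaming Expression: parentheses, commas and quotes
        // are all reserved or unsafe in a URL and need percent-encoding.
        String expr = "search(collection1, q=\"*:*\", fl=\"id\", sort=\"id asc\")";
        String encoded = URLEncoder.encode(expr, StandardCharsets.UTF_8.name());
        System.out.println(
            "http://localhost:8983/solr/collection1/stream?expr=" + encoded);
    }
}
```

Running this prints the request URL with the expression percent-encoded (e.g. `(` becomes `%28`), which is what an HTTP client should actually send.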






[jira] [Commented] (SOLR-8412) SchemaManager should synchronize on performOperations method

2015-12-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064306#comment-15064306
 ] 

Yonik Seeley commented on SOLR-8412:


bq. We should synchronize on performOperations instead. The net effect will be 
the same but the code will be more clear.

Changing complex synchronization causes warning bells to go off.
Are you sure that the net effect is the same?  I'm not familiar with this part 
of the code, so hopefully someone else who is can chime in... but at first 
blush it definitely doesn't look safe.
This patch changes the locking from using schemaUpdateLock (which is shared 
among multiple objects) to using either schemaUpdateLock or the current 
object's monitor.  It's certainly not simpler or clearer to try to figure out 
whether things are still thread safe.

Reviewing the existing code some, I see this:
- SchemaManager.performOperations() calls doOperations() protected by 
schemaUpdateLock
  - this performs a list of operations on the latest ManagedIndexSchema object, 
which *may* be created fresh, but will be passed the same schemaUpdateLock
  - these operations can call things like addFields()

AddSchemaFieldsUpdateProcessor has this:
{code}
// Need to hold the lock during the entire attempt to ensure that
// the schema on the request is the latest
synchronized (oldSchema.getSchemaUpdateLock()) {
  try {
IndexSchema newSchema = oldSchema.addFields(newFields);
{code}
But with the patch, we're locking on a different object, so what the comment 
says it is trying to do may be broken.
Actually, it's not at all clear to me why, even in the current code, we don't 
need to grab the latest schema again *after* we take the update lock.

Moving on to addFields(): it looks like it can (with the patch) now be called 
on the same object with two different locks held.  Even on different objects, 
it's not clear that it's still safe.

Bottom line: the synchronization in the current code is complex enough that I 
don't know if the proposed simplifications are safe or not.  If you could add 
some explanation around that, it would be great.
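The underlying hazard is worth spelling out: synchronized blocks only exclude each other when they lock the *same* object. A minimal sketch (nothing to do with the Solr classes themselves) showing that a thread locking a different monitor sails straight through:

```java
public class LockObjectSketch {
    public static void main(String[] args) throws Exception {
        final Object sharedLock = new Object();
        final Object otherLock  = new Object();

        synchronized (sharedLock) {
            // A second thread that locks a *different* object is not blocked,
            // even though we are still holding sharedLock. Mutual exclusion
            // only works when every party synchronizes on the same monitor.
            Thread t = new Thread(() -> {
                synchronized (otherLock) {
                    System.out.println("entered despite sharedLock being held");
                }
            });
            t.start();
            t.join(1000); // returns immediately: there was no contention
            System.out.println(t.isAlive() ? "blocked" : "not blocked");
        }
    }
}
```

This is why switching some call sites from schemaUpdateLock to the object's own monitor is not an equivalence-preserving refactor unless every path that previously took schemaUpdateLock is accounted for.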


> SchemaManager should synchronize on performOperations method
> 
>
> Key: SOLR-8412
> URL: https://issues.apache.org/jira/browse/SOLR-8412
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-8412.patch, SOLR-8412.patch, SOLR-8412.patch
>
>
> Currently SchemaManager synchronizes on {{schema.getSchemaUpdateLock()}} . We 
> should synchronize on {{performOperations}} instead. 
> The net effect will be the same but the code will be more clear. 
> {{schema.getSchemaUpdateLock()}} is used when you want to edit a schema and 
> add one field at a time. But the way SchemaManager works is that it does bulk 
> operations i.e performs all operations and then persists the final schema . 
> If there were two concurrent operations that took place, the later operation 
> will retry by fetching the latest schema .






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064319#comment-15064319
 ] 

Uwe Schindler commented on LUCENE-6937:
---

bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-Lucene/Solr-merge SVN repos, so Dawid does not need to 
clone them anyway; we can leave what exists there. It looks complete, although 
the trunk branch should be renamed to "master" (or github's default-branch 
config changed), because currently the wrong branch is shown when you go to the 
repo's homepage.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064338#comment-15064338
 ] 

Dawid Weiss commented on LUCENE-6937:
-

Well, you could just commit a similar message to solr's old repo folder -- if 
it gets synced up it'd show the same message on github. But honestly, I don't 
think it's worth it (I'd just ask github to close the mirror of these two old 
branches).

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E





