Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!

2015-07-08 Thread Rory O'Donnell

Hi Uwe, Dawid,

I should have mentioned that a fix for JDK-8086046 is also included in b71.

Rgds,Rory

On 07/07/2015 11:51, Rory O'Donnell wrote:

Hi Uwe, Dawid,

b71 is now available on java.net

Rgds,Rory

On 07/07/2015 11:35, Dawid Weiss wrote:

Hi Roland,

It's Uwe (Schindler's) Jenkins, so we'll keep an eye on this when he
installs the new build and let you know, thanks!

Dawid


On Tue, Jul 7, 2015 at 12:08 PM, Roland Westrelin
roland.westre...@oracle.com wrote:

Hi Dawid,


Another failure related to this issue:
https://bugs.openjdk.java.net/browse/JDK-8129831

This looks like a duplicate of:

https://bugs.openjdk.java.net/browse/JDK-8086016

(that you can’t see). It was integrated in b71. Do you know if your 
issue is reproducible with b71?


Roland.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!

2015-07-08 Thread Uwe Schindler
Hi,

 

I gave b71 a try. It’s now in the build chain; we will see how it behaves.

 

Uwe

 

-

Uwe Schindler

uschind...@apache.org 

ASF Member, Apache Lucene PMC / Committer

Bremen, Germany

http://lucene.apache.org/

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] 
Sent: Wednesday, July 08, 2015 11:04 AM
To: dev@lucene.apache.org; Roland Westrelin
Cc: rory.odonn...@oracle.com; Uwe Schindler; Dalibor Topic; Balchandra Vaidya; 
Vivek Theeyarath
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build 
# 13358 - Failure!

 

Hi Uwe, Dawid, 

I should have mentioned that a fix for JDK-8086046 is also included in b71.

Rgds,Rory

On 07/07/2015 11:51, Rory O'Donnell wrote:

Hi Uwe, Dawid, 

b71 is now available on java.net 

Rgds,Rory 

On 07/07/2015 11:35, Dawid Weiss wrote: 



Hi Roland, 

It's Uwe (Schindler's) Jenkins, so we'll keep an eye on this when he 
installs the new build and let you know, thanks! 

Dawid 


On Tue, Jul 7, 2015 at 12:08 PM, Roland Westrelin 
roland.westre...@oracle.com wrote: 



Hi Dawid, 




Another failure related to this issue: 
https://bugs.openjdk.java.net/browse/JDK-8129831 

This looks like a duplicate of: 

https://bugs.openjdk.java.net/browse/JDK-8086016 

(that you can’t see). It was integrated in b71. Do you know if your issue is 
reproducible with b71? 

Roland. 


 





-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland 


[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618512#comment-14618512
 ] 

Michael McCandless commented on LUCENE-6667:


bq. when you insert new tokens, restore the state instead of clearAttributes()

But e.g. if the syn filter matched "domain name system" and wants to insert "dns", 
which token's attributes is it supposed to clone for the "dns" token?

 Custom attributes get cleared by filters
 

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.
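
 The contract described above can be sketched with a toy stand-in for Lucene's 
 {{AttributeImpl}} (plain JDK classes, not the real Lucene API -- class and field 
 names here are illustrative only):

```java
// Toy stand-in for the AttributeImpl contract: a custom attribute implements
// clear() and copyTo() so the framework can reset it and snapshot it.
abstract class SimpleAttribute {
    abstract void clear();                        // reset to the default state
    abstract void copyTo(SimpleAttribute target); // copy state into another instance
}

class MyCustomAttribute extends SimpleAttribute {
    String payload; // the custom state we care about

    @Override
    void clear() { payload = null; }

    @Override
    void copyTo(SimpleAttribute target) {
        ((MyCustomAttribute) target).payload = payload;
    }
}

public class ClearAttributesDemo {
    public static void main(String[] args) {
        MyCustomAttribute attr = new MyCustomAttribute();
        attr.payload = "set by tokenizer";

        // A filter that calls clearAttributes() resets ALL registered
        // attributes, including custom ones it knows nothing about --
        // this is the value loss reported in the issue.
        attr.clear();
        System.out.println(attr.payload); // null: the custom value is gone
    }
}
```

 This is only a model of the problem, not of SynonymFilter itself.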



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Resolved] (SOLR-6353) Let Range Facets Hang off of Pivots

2015-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6353.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.3

Thanks Steve and Hoss!

 Let Range Facets Hang off of Pivots
 ---

 Key: SOLR-6353
 URL: https://issues.apache.org/jira/browse/SOLR-6353
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk


 Conceptually very similar to the previous sibling issues about hanging stats 
 of pivots & ranges: using a tag on {{facet.range}} requests, we make it 
 possible to hang a range off the nodes of Pivots.
 Example...
 {noformat}
 facet.pivot={!range=r1}category,manufacturer
 facet.range={!tag=r1}price
 {noformat}
 ...with the request above, in addition to computing range facets over the 
 price field for the entire result set, the PivotFacet component will also 
 include all of those ranges for every node of the tree it builds up when 
 generating a pivot of the fields category,manufacturer
 This should easily be combinable with the other sibling tasks to hang stats 
 off ranges which hang off pivots. (see parent issue for example)
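
 The request above has to be URL-encoded when sent over HTTP. A minimal sketch 
 (the host, the {{products}} collection name, and the start/end/gap values are 
 hypothetical; range facets do require start/end/gap):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PivotRangeRequest {
    // Builds the query string for the tagged-range-off-pivot request:
    // local params like {!range=r1} must be percent-encoded.
    static String buildQuery() {
        try {
            return "q=" + URLEncoder.encode("*:*", "UTF-8")
                + "&facet=true"
                + "&facet.pivot=" + URLEncoder.encode("{!range=r1}category,manufacturer", "UTF-8")
                + "&facet.range=" + URLEncoder.encode("{!tag=r1}price", "UTF-8")
                + "&facet.range.start=0&facet.range.end=1000&facet.range.gap=100";
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always available", e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical local Solr endpoint and collection.
        System.out.println("http://localhost:8983/solr/products/select?" + buildQuery());
    }
}
```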






[jira] [Updated] (LUCENE-6667) Custom attributes get cleared by SynonymFilter

2015-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6667:
--
Summary: Custom attributes get cleared by SynonymFilter  (was: Custom 
attributes get cleared by filters)

 Custom attributes get cleared by SynonymFilter
 --

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618515#comment-14618515
 ] 

Uwe Schindler commented on LUCENE-6667:
---

bq. But e.g. if the syn filter matched "domain name system" and wants to insert 
"dns", which token's attributes is it supposed to clone for the "dns" token?

That's the problem with multi-word synonyms... It has to be defined (first, 
last, ...). But I am not sure what the right thing to do is!

 Custom attributes get cleared by filters
 

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Commented] (LUCENE-6582) SynonymFilter should generate a correct (or, at least, better) graph

2015-07-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618511#comment-14618511
 ] 

Michael McCandless commented on LUCENE-6582:


bq. I'm going to now try to simplify SynFilter by removing the hairy graph 
flattening it must do today

I opened LUCENE-6664 for this... it seems to work!

 SynonymFilter should generate a correct (or, at least, better) graph
 

 Key: LUCENE-6582
 URL: https://issues.apache.org/jira/browse/LUCENE-6582
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ian Ribas
 Attachments: LUCENE-6582.patch, LUCENE-6582.patch, LUCENE-6582.patch, 
 after.png, after2.png, after3.png, before.png


 Some time ago, I had a problem with synonyms and phrase type queries 
 (actually, it was elasticsearch and I was using a match query with multiple 
 terms and the {{and}} operator, as explained in more detail here: 
 https://github.com/elastic/elasticsearch/issues/10394).
 That issue led to some work on Lucene: LUCENE-6400 (where I helped a little 
 with tests) and  LUCENE-6401. This issue is also related to LUCENE-3843.
 Starting from the discussion on LUCENE-6400, I'm attempting to implement a 
 solution. Here is a patch with a first step - the implementation to fix 
 SynFilter to be able to 'make positions' (as was mentioned on the 
 [issue|https://issues.apache.org/jira/browse/LUCENE-6400?focusedCommentId=14498554&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14498554]).
  In this way, the synonym filter generates a correct (or, at least, better) 
 graph.
 As the synonym matching is greedy, I only had to worry about fixing the 
 position length of the rules of the current match, no future or past synonyms 
 would span over this match (please correct me if I'm wrong!). It did 
 require more buffering, twice as much.
 The new behavior I added is not active by default, a new parameter has to be 
 passed in a new constructor for {{SynonymFilter}}. The changes I made do 
 change the token stream generated by the synonym filter, and I thought it 
 would be better to let that be a voluntary decision for now.
 I did some refactoring on the code, but mostly on what I had to change for 
 my implementation, so that the patch was not too hard to read. I created 
 specific unit tests for the new implementation 
 ({{TestMultiWordSynonymFilter}}) that should show how things will be with the 
 new behavior.
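
 The position-length idea can be sketched with a minimal token model (plain JDK, 
 not Lucene's attribute classes; terms and numbers below are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SynGraphDemo {
    // Minimal token model: position increment says how far a token advances
    // from the previous token; position length says how many positions it spans.
    static class Token {
        final String term;
        final int posInc, posLen;
        Token(String term, int posInc, int posLen) {
            this.term = term; this.posInc = posInc; this.posLen = posLen;
        }
    }

    // Computes {start, end} positions for each token by accumulating posInc.
    static List<int[]> positions(List<Token> graph) {
        List<int[]> out = new ArrayList<>();
        int pos = 0;
        for (Token t : graph) {
            pos += t.posInc;
            out.add(new int[] { pos, pos + t.posLen });
        }
        return out;
    }

    public static void main(String[] args) {
        // The synonym "dns" for "domain name system": giving the single token
        // posLen=3 makes it span the same positions as the three-word phrase,
        // so phrase queries see a consistent graph.
        List<Token> graph = Arrays.asList(
            new Token("dns", 1, 3),
            new Token("domain", 0, 1), // posInc=0: same start position as "dns"
            new Token("name", 1, 1),
            new Token("system", 1, 1));
        List<int[]> pos = positions(graph);
        for (int i = 0; i < graph.size(); i++) {
            System.out.println(graph.get(i).term
                + ": start=" + pos.get(i)[0] + " end=" + pos.get(i)[1]);
        }
    }
}
```

 In this graph "dns" ends exactly where "system" ends, which is what the patch 
 aims for.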






[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by SynonymFilter

2015-07-08 Thread Oliver Becker (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618517#comment-14618517
 ] 

Oliver Becker commented on LUCENE-6667:
---

Ah, ok, now I understand. Yes, this happens for tokens that will be replaced by 
the synonym filter, i.e. for inserted tokens (and it is a single-token 
replacement). But I see the problem.

 Custom attributes get cleared by SynonymFilter
 --

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Commented] (SOLR-6353) Let Range Facets Hang off of Pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618420#comment-14618420
 ] 

ASF subversion and git services commented on SOLR-6353:
---

Commit 1689841 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689841 ]

SOLR-4212: SOLR-6353: Added attribution in changes.txt

 Let Range Facets Hang off of Pivots
 ---

 Key: SOLR-6353
 URL: https://issues.apache.org/jira/browse/SOLR-6353
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Shalin Shekhar Mangar

 Conceptually very similar to the previous sibling issues about hanging stats 
 of pivots & ranges: using a tag on {{facet.range}} requests, we make it 
 possible to hang a range off the nodes of Pivots.
 Example...
 {noformat}
 facet.pivot={!range=r1}category,manufacturer
 facet.range={!tag=r1}price
 {noformat}
 ...with the request above, in addition to computing range facets over the 
 price field for the entire result set, the PivotFacet component will also 
 include all of those ranges for every node of the tree it builds up when 
 generating a pivot of the fields category,manufacturer
 This should easily be combinable with the other sibling tasks to hang stats 
 off ranges which hang off pivots. (see parent issue for example)






[jira] [Resolved] (SOLR-7748) Fix bin/solr to work on IBM J9

2015-07-08 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-7748.
--
Resolution: Fixed

Committed to trunk and 5x. Thanks [~elyograg] for the review!

 Fix bin/solr to work on IBM J9
 --

 Key: SOLR-7748
 URL: https://issues.apache.org/jira/browse/SOLR-7748
 Project: Solr
  Issue Type: Bug
  Components: Server
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.3, Trunk

 Attachments: SOLR-7748.patch, SOLR-7748.patch, SOLR-7748.patch, 
 solr-7748.patch


 bin/solr doesn't work on IBM J9 because it sets -Xloggc flag, while J9 
 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.
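
 The decision the fix makes can be sketched as follows (bin/solr itself is a 
 shell script; this is just the vendor-detection logic, and the method name is 
 illustrative):

```java
public class GcLogFlag {
    // HotSpot JVMs understand -Xloggc:<file>; IBM J9 uses -Xverbosegclog:<file>.
    static String gcLogFlag(String vmVendor, String logFile) {
        if (vmVendor != null && vmVendor.toLowerCase().contains("ibm")) {
            return "-Xverbosegclog:" + logFile;
        }
        return "-Xloggc:" + logFile;
    }

    public static void main(String[] args) {
        // Picks the flag for the JVM we are running on.
        System.out.println(gcLogFlag(System.getProperty("java.vm.vendor"), "solr_gc.log"));
    }
}
```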






[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618472#comment-14618472
 ] 

ASF subversion and git services commented on SOLR-7748:
---

Commit 1689849 from [~shaie] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689849 ]

SOLR-7748: Fix bin/solr to work on IBM J9

 Fix bin/solr to work on IBM J9
 --

 Key: SOLR-7748
 URL: https://issues.apache.org/jira/browse/SOLR-7748
 Project: Solr
  Issue Type: Bug
  Components: Server
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.3, Trunk

 Attachments: SOLR-7748.patch, SOLR-7748.patch, SOLR-7748.patch, 
 solr-7748.patch


 bin/solr doesn't work on IBM J9 because it sets -Xloggc flag, while J9 
 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.






[jira] [Resolved] (SOLR-4212) Let facet queries hang off of pivots

2015-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-4212.
-
Resolution: Fixed

Thanks Steve and Hoss!

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


  Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]
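
 The parameter format above can be assembled and URL-encoded like this (a sketch; 
 the host and {{products}} collection name are hypothetical, and repeating 
 facet.query is how multi-valued Solr params are sent):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PivotQueryRequest {
    // Percent-encodes one request parameter; local params like {!tag=r1}
    // must be encoded before they go on the URL.
    static String param(String name, String value) {
        try {
            return "&" + name + "=" + URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always available", e);
        }
    }

    public static void main(String[] args) {
        String qs = "facet=true"
            + param("q", "*:*")
            + param("facet.pivot", "{!query=r1}category,manufacturer")
            + param("facet.query", "{!tag=r1}somequery")
            + param("facet.query", "{!tag=r1}somedate:[NOW-1YEAR TO NOW]");
        System.out.println("http://localhost:8983/solr/products/select?" + qs);
    }
}
```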






[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618413#comment-14618413
 ] 

ASF subversion and git services commented on SOLR-4212:
---

Commit 1689839 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689839 ]

SOLR-4212: SOLR-6353: Let facet queries and facet ranges hang off of pivots

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


  Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]






[jira] [Commented] (SOLR-6353) Let Range Facets Hang off of Pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618414#comment-14618414
 ] 

ASF subversion and git services commented on SOLR-6353:
---

Commit 1689839 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689839 ]

SOLR-4212: SOLR-6353: Let facet queries and facet ranges hang off of pivots

 Let Range Facets Hang off of Pivots
 ---

 Key: SOLR-6353
 URL: https://issues.apache.org/jira/browse/SOLR-6353
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Shalin Shekhar Mangar

 Conceptually very similar to the previous sibling issues about hanging stats 
 of pivots & ranges: using a tag on {{facet.range}} requests, we make it 
 possible to hang a range off the nodes of Pivots.
 Example...
 {noformat}
 facet.pivot={!range=r1}category,manufacturer
 facet.range={!tag=r1}price
 {noformat}
 ...with the request above, in addition to computing range facets over the 
 price field for the entire result set, the PivotFacet component will also 
 include all of those ranges for every node of the tree it builds up when 
 generating a pivot of the fields category,manufacturer
 This should easily be combinable with the other sibling tasks to hang stats 
 off ranges which hang off pivots. (see parent issue for example)






[jira] [Commented] (SOLR-6353) Let Range Facets Hang off of Pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618418#comment-14618418
 ] 

ASF subversion and git services commented on SOLR-6353:
---

Commit 1689840 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1689840 ]

SOLR-4212: SOLR-6353: Added attribution in changes.txt

 Let Range Facets Hang off of Pivots
 ---

 Key: SOLR-6353
 URL: https://issues.apache.org/jira/browse/SOLR-6353
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Shalin Shekhar Mangar

 Conceptually very similar to the previous sibling issues about hanging stats 
 of pivots & ranges: using a tag on {{facet.range}} requests, we make it 
 possible to hang a range off the nodes of Pivots.
 Example...
 {noformat}
 facet.pivot={!range=r1}category,manufacturer
 facet.range={!tag=r1}price
 {noformat}
 ...with the request above, in addition to computing range facets over the 
 price field for the entire result set, the PivotFacet component will also 
 include all of those ranges for every node of the tree it builds up when 
 generating a pivot of the fields category,manufacturer
 This should easily be combinable with the other sibling tasks to hang stats 
 off ranges which hang off pivots. (see parent issue for example)






[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618417#comment-14618417
 ] 

ASF subversion and git services commented on SOLR-4212:
---

Commit 1689840 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1689840 ]

SOLR-4212: SOLR-6353: Added attribution in changes.txt

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


  Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]






[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618419#comment-14618419
 ] 

ASF subversion and git services commented on SOLR-4212:
---

Commit 1689841 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689841 ]

SOLR-4212: SOLR-6353: Added attribution in changes.txt

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


  Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]






[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618462#comment-14618462
 ] 

ASF subversion and git services commented on SOLR-7748:
---

Commit 1689847 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1689847 ]

SOLR-7748: Fix bin/solr to work on IBM J9

 Fix bin/solr to work on IBM J9
 --

 Key: SOLR-7748
 URL: https://issues.apache.org/jira/browse/SOLR-7748
 Project: Solr
  Issue Type: Bug
  Components: Server
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.3, Trunk

 Attachments: SOLR-7748.patch, SOLR-7748.patch, SOLR-7748.patch, 
 solr-7748.patch


 bin/solr doesn't work on IBM J9 because it sets -Xloggc flag, while J9 
 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.






[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618312#comment-14618312
 ] 

Uwe Schindler commented on LUCENE-6667:
---

I have not looked at SynonymFilter, but maybe there is a bug. In general, the 
above is how all filters should behave. Maybe we should somehow add assertions 
that filters never call clearAttributes(), but this is hard because of the 
shared state between filters and the root.
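
The capture/restore pattern being discussed can be modeled with a plain Map 
standing in for the registered attributes (a simplified sketch, not Lucene's 
AttributeSource API):

```java
import java.util.HashMap;
import java.util.Map;

public class RestoreStateDemo {
    // Snapshot of every attribute, like AttributeSource.captureState().
    static Map<String, String> captureState(Map<String, String> attrs) {
        return new HashMap<>(attrs);
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("term", "domain");
        attrs.put("custom", "set by tokenizer");

        Map<String, String> saved = captureState(attrs);

        // A filter that wants to emit an inserted token should restore a
        // captured state rather than clear everything, so custom attributes
        // it knows nothing about survive.
        attrs.clear();            // what clearAttributes() does: all values gone
        attrs.putAll(saved);      // restoreState(): the custom value comes back
        attrs.put("term", "dns"); // the filter then overwrites what it manages

        System.out.println(attrs.get("custom")); // prints "set by tokenizer"
    }
}
```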

 Custom attributes get cleared by filters
 

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_45) - Build # 5004 - Failure!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5004/
Java: 32bit/jdk1.8.0_45 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([9C0780DE0DBCBE19:26D5EFA68E92500C]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:770)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:319)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int></lst><result name="response" numFound="0" start="0"></result>
</response>

request was: q=id:529&qt=standard&start=0&rows=20&version=2.2
at 

[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618309#comment-14618309
 ] 

Uwe Schindler commented on LUCENE-6667:
---

In filters the approach should be the following:
- Capture the state of the original token
- When you insert new tokens, restore that state instead of calling clearAttributes()
- Set only the changed attributes

This approach is used by stemmers that insert stemmed tokens (preserving the 
original), so the original attributes stay alive.

clearAttributes() should only be called in Tokenizers or root TokenStreams.
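The steps above can be sketched with a tiny stand-in for AttributeSource. This 
models only the captureState()/restoreState() semantics with plain maps; it is 
not Lucene's actual API, and the attribute names are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the capture/restore pattern: a token's attributes are a map,
// and a captured "state" is a copy of that map. Not Lucene's real AttributeSource.
public class CaptureRestoreDemo {
    static Map<String, String> injectSynonym(Map<String, String> token, String synonym) {
        Map<String, String> state = new HashMap<>(token);    // 1. capture the original token's state
        Map<String, String> inserted = new HashMap<>(state); // 2. restore it (instead of clearing)
        inserted.put("term", synonym);                       // 3. set only the changed attribute
        return inserted;
    }

    public static void main(String[] args) {
        Map<String, String> token = new HashMap<>();
        token.put("term", "run");
        token.put("custom", "my-payload"); // a custom attribute the filter knows nothing about
        Map<String, String> syn = injectSynonym(token, "sprint");
        // The custom attribute survives because we restored state rather than clearing it.
        System.out.println(syn.get("custom")); // prints my-payload
    }
}
```

Had the filter called clearAttributes() instead at step 2, the "custom" entry 
would have been wiped, which is exactly the bug reported below.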

 Custom attributes get cleared by filters
 

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Updated] (LUCENE-6645) BKD tree queries should use BitDocIdSet.Builder

2015-07-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6645:
-
Attachment: LUCENE-6645.patch

Here is a fixed patch.

 BKD tree queries should use BitDocIdSet.Builder
 ---

 Key: LUCENE-6645
 URL: https://issues.apache.org/jira/browse/LUCENE-6645
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: LUCENE-6645.patch, LUCENE-6645.patch, LUCENE-6645.patch, 
 LUCENE-6645.patch


 When I was iterating on BKD tree originally I remember trying to use this 
 builder (which makes a sparse bit set at first and then upgrades to dense if 
 enough bits get set) and being disappointed with its performance.
 I wound up just making a FixedBitSet every time, but this is obviously 
 wasteful for small queries.
 It could be the perf was poor because I was always .or'ing in DISIs that had 
 512 - 1024 hits each time (the size of each leaf cell in the BKD tree)?  I 
 also had to make my own DISI wrapper around each leaf cell... maybe that was 
 the source of the slowness, not sure.
 I also sort of wondered whether the SmallDocSet in spatial module (backed by 
 a SentinelIntSet) might be faster ... though it'd need to be sorted in the 
 end after building, before returning to Lucene.






[jira] [Commented] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618160#comment-14618160
 ] 

ASF subversion and git services commented on LUCENE-6666:
-

Commit 1689803 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1689803 ]

LUCENE-6666: (damn, how evil!) Thread pool's threads may escape in 
TestCodecLoadingDeadlock

 Thread pool's threads may escape in TestCodecLoadingDeadlock
 

 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0









[jira] [Created] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Oliver Becker (JIRA)
Oliver Becker created LUCENE-6667:
-

 Summary: Custom attributes get cleared by filters
 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker


I believe the Lucene API enables users to define their custom attributes (by 
extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear and 
restore the state of this custom attribute.

However, some filters (in our case the SynonymFilter) simply call 
{{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
filter just resets some known attributes, simply ignoring all other custom 
attributes. In the end our custom attribute value is lost.

Is this a bug in {{SynonymFilter}} (and others) or are we using the API in the 
wrong way?

A solution might be of course to provide empty implementations of {{clear}} and 
{{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Created] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-6666:
---

 Summary: Thread pool's threads may escape in 
TestCodecLoadingDeadlock
 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0









[jira] [Commented] (SOLR-6353) Let Range Facets Hang off of Pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618158#comment-14618158
 ] 

ASF subversion and git services commented on SOLR-6353:
---

Commit 1689802 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1689802 ]

SOLR-4212: SOLR-6353: Let facet queries and facet ranges hang off of pivots

 Let Range Facets Hang off of Pivots
 ---

 Key: SOLR-6353
 URL: https://issues.apache.org/jira/browse/SOLR-6353
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Shalin Shekhar Mangar

 Conceptually very similar to the previous sibling issues about hanging stats 
 off of pivots & ranges: using a tag on {{facet.range}} requests, we make it 
 possible to hang a range off the nodes of Pivots.
 Example...
 {noformat}
 facet.pivot={!range=r1}category,manufacturer
 facet.range={tag=r1}price
 {noformat}
 ...with the request above, in addition to computing range facets over the 
 price field for the entire result set, the PivotFacet component will also 
 include all of those ranges for every node of the tree it builds up when 
 generating a pivot of the fields category,manufacturer
 This should easily be combinable with the other sibling tasks to hang stats 
 off ranges which hang off pivots. (see parent issue for example)






[jira] [Commented] (LUCENE-6660) Assertion fails for ToParentBlockJoinQuery$BlockJoinScorer.nextDoc

2015-07-08 Thread Christian Danninger (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618156#comment-14618156
 ] 

Christian Danninger commented on LUCENE-6660:
-

You are definitely right. Thanks a lot. 
Maybe you can modify the error message a bit to be more precise and to cover 
this case too.
I will make a comment here: 
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers

Btw. no intersection for the boolean query above.


 Assertion fails for ToParentBlockJoinQuery$BlockJoinScorer.nextDoc
 --

 Key: LUCENE-6660
 URL: https://issues.apache.org/jira/browse/LUCENE-6660
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 5.2.1
 Environment: Running Solr 5.2.1 on Windows x64
 java version "1.7.0_51"
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode, sharing)
Reporter: Christian Danninger
 Attachments: index.zip


 After I enable assertion with -ea:org.apache... I got the stack trace 
 below. I checked that the parent filter only match parent documents and the 
 child filter only match child documents. Field id is unique.
 16:55:06,269 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
 (http-127.0.0.1/127.0.0.1:8080-1) null:java.lang.RuntimeException: 
 java.lang.AssertionError
   at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
   at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
   at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
   at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
   at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149)
   at 
 org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169)
   at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145)
   at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97)
   at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:559)
   at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102)
   at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:336)
   at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
   at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653)
   at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:920)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.AssertionError
   at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:278)
   at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:204)
   at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:176)
   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:771)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)
   at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:202)
   at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1666)
   at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1485)
   at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:561)
   at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
   ... 16 more
 Without assertions enabled:
 17:21:39,008 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
 (http-127.0.0.1/127.0.0.1:8080-1) null:java.lang.IllegalStateException: child 
 query must only match non-parent docs, but parent docID=2147483647 matched 
 childScorer=class 

[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618157#comment-14618157
 ] 

ASF subversion and git services commented on SOLR-4212:
---

Commit 1689802 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1689802 ]

SOLR-4212: SOLR-6353: Let facet queries and facet ranges hang off of pivots

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 5.3, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-4212.patch, SOLR-6353-4212.patch, 
 SOLR-6353-4212.patch, SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


  Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]






[jira] [Commented] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618238#comment-14618238
 ] 

ASF subversion and git services commented on LUCENE-6666:
-

Commit 1689819 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1689819 ]

LUCENE-6666: reverting this from trunk as the code doesn't use a separate 
thread-killer thread (thanks Uwe).

 Thread pool's threads may escape in TestCodecLoadingDeadlock
 

 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0









[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by SynonymFilter

2015-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618574#comment-14618574
 ] 

Robert Muir commented on LUCENE-6667:
-

You never know what the attribute is doing; this might be inappropriate.

I don't think we should change it for SynonymFilter in an inconsistent way, 
because it will be confusing, and would then be confusing for other token filters too.


 Custom attributes get cleared by SynonymFilter
 --

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by SynonymFilter

2015-07-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618542#comment-14618542
 ] 

Michael McCandless commented on LUCENE-6667:


Maybe we could special case the single-token case to always clone the attrs 
from the token it matched?

 Custom attributes get cleared by SynonymFilter
 --

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13198/
Java: 64bit/jdk1.9.0-ea-b71 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF

Error Message:
Invalid Date String:'Tue Mar 09 08:44:49 EEST 2010'

Stack Trace:
org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar 09 08:44:49 
EEST 2010'
at 
__randomizedtesting.SeedInfo.seed([F05E63CDB3511062:9E9818C2B094BB37]:0)
at 
org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:981)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:122)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:127)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:230)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocalFromHandler(ExtractingRequestHandlerTest.java:737)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocal(ExtractingRequestHandlerTest.java:744)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF(ExtractingRequestHandlerTest.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)

[GitHub] lucene-solr pull request: Absent docValues represented by NONE, no...

2015-07-08 Thread grossws
GitHub user grossws opened a pull request:

https://github.com/apache/lucene-solr/pull/184

Absent docValues represented by NONE, not null

`docValuesType()` returns non-null in `solr.schema.FieldType`;
this is protected by the setter in `FieldType`, so an invalid docValues
configuration in `FieldType#createFields()` should be checked with
`field.hasDocValues() && f.fieldType().docValuesType() == NONE`
instead of `== null`.
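A minimal sketch of the check the PR proposes. The enum and method below are 
simplified stand-ins for Lucene's DocValuesType and Solr's field-type check, 
invented for illustration; the real classes have more members:

```java
// Simplified stand-in: Lucene's DocValuesType enum includes a NONE constant,
// so "no doc values" is represented by NONE rather than by null.
enum DocValuesType { NONE, NUMERIC, BINARY, SORTED }

public class DocValuesCheckDemo {
    // Returns true when the schema asks for docValues but the field type
    // provides none -- the invalid configuration the PR wants to detect.
    static boolean invalidDocValuesConfig(boolean hasDocValues, DocValuesType type) {
        return hasDocValues && type == DocValuesType.NONE; // not: type == null
    }

    public static void main(String[] args) {
        System.out.println(invalidDocValuesConfig(true, DocValuesType.NONE));    // invalid
        System.out.println(invalidDocValuesConfig(true, DocValuesType.NUMERIC)); // valid
    }
}
```

Comparing against `null` would never fire, because the enum value is always 
non-null once the setter has run.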

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/grossws/lucene-solr fix-dv-unsupported

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #184


commit eb41b8a7f6c9b6ae00203a2f59552d0625dd9c04
Author: Konstantin Gribov gros...@gmail.com
Date:   2015-07-08T12:54:33Z

Absent docValues represented by NONE, not null

`docValuesType()` returns non-null in `solr.schema.FieldType`;
this is protected by the setter in `FieldType`, so an invalid docValues
configuration in `FieldType#createFields()` should be checked with
`field.hasDocValues() && f.fieldType().docValuesType() == NONE`
instead of `== null`.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs

2015-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7692:
-
Attachment: SOLR-7692.patch

added APIs to edit BasicAuth/ZkBasedAuthorization

 Implement BasicAuth based impl for the new Authentication/Authorization APIs
 

 Key: SOLR-7692
 URL: https://issues.apache.org/jira/browse/SOLR-7692
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch


 This involves various components
 h2. Authentication
 A basic auth based authentication filter. This should retrieve the user 
 credentials from ZK.  The user name and sha1 hash of password should be 
 stored in ZK
 sample authentication json 
 {code:javascript}
 {
   "authentication": {
     "class": "solr.BasicAuthPlugin",
     "users": {
       "john": "09fljnklnoiuy98 buygujkjnlk",
       "david": "f678njfgfjnklno iuy9865ty",
       "pete": "87ykjnklndfhjh8 98uyiy98"
     }
   }
 }
 {code}
 h2. authorization plugin
 This would store the roles of various users and their privileges in ZK
 sample authorization.json
 {code:javascript}
 {
   "authorization": {
     "class": "solr.ZKAuthorization",
     "roles": {
       "admin": ["john"],
       "guest": ["john", "david", "pete"]
     },
     "permissions": {
       "collection-edit": {
         "role": "admin"
       },
       "coreadmin": {
         "role": "admin"
       },
       "config-edit": {
         // all collections
         "role": "admin",
         "method": "POST"
       },
       "schema-edit": {
         "roles": "admin",
         "method": "POST"
       },
       "update": {
         // all collections
         "role": "dev"
       },
       "mycoll_update": {
         "collection": "mycoll",
         "path": ["/update/*"],
         "role": ["somebody"]
       }
     }
   }
 }
 {code} 
 We will also need to provide APIs to create users and assign them roles
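As a sketch of the credential format mentioned above ("sha1 hash of password"), here is hex-encoded unsalted SHA-1 via the JDK's `MessageDigest`. The hex encoding, the lack of a salt, and the `sha1Hex` helper name are assumptions; the issue does not pin down the exact stored format.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class Sha1Hash {
    // Hex-encoded, unsalted SHA-1 of a password string (assumed format).
    static String sha1Hex(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b)); // two hex chars per byte
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-1 is a mandatory JDK algorithm", e);
        }
    }

    public static void main(String[] args) {
        // Known test vector: SHA-1("abc")
        System.out.println(sha1Hex("abc")); // a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```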



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated LUCENE-6595:
-
Attachment: LUCENE-6595.patch

Refactored some code inside BaseCharFilter to make it cleaner. I think this 
patch is final.

[~mikemccand] I changed 
{code}
addOffCorrectMap(off, cumulativeDiff, 0);
{code}
to
{code}
addOffCorrectMap(off, cumulativeDiff, off);
{code}
But it fails with some tests of HTMLStripCharFilterTest. I'm not sure what is 
going on in HTMLStripCharFilter.


 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch


 Spinoff from this original Elasticsearch issue: 
 https://github.com/elastic/elasticsearch/issues/11726
 If I make a MappingCharFilter with these mappings:
 {noformat}
   "(" -> ""
   ")" -> ""
 {noformat}
 i.e., just erase left and right paren, then tokenizing the string
 "(F31)" with e.g. WhitespaceTokenizer produces a single token "F31",
 with start offset 1 (good).
 But for its end offset, I would expect/want 4, but it produces 5
 today.
 This can be easily explained given how the mapping works: each time a
 mapping rule matches, we update the cumulative offset difference,
 conceptually as an array like this (it's encoded more compactly):
 {noformat}
   Output offset: 0 1 2 3
Input offset: 1 2 3 5
 {noformat}
 When the tokenizer produces F31, it assigns it startOffset=0 and
 endOffset=3 based on the characters it sees (F, 3, 1).  It then asks
 the CharFilter to correct those offsets, mapping them backwards
 through the above arrays, which creates startOffset=1 (good) and
 endOffset=5 (bad).
 At first, to fix this, I thought this is an off-by-1 and when
 correcting the endOffset we really should return
 1+correct(outputEndOffset-1), which would return the correct value (4)
 here.
 But that's too naive, e.g. here's another example:
 {noformat}
    - cc
 {noformat}
 If I then tokenize , today we produce the correct offsets (0, 4)
 but if we do this off-by-1 fix for endOffset, we would get the wrong
 endOffset (2).
 I'm not sure what to do here...
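The cumulative-diff bookkeeping in the description can be reproduced with a toy version of the correction map. The `OffsetCorrection` class below is illustrative, not `BaseCharFilter` itself; it only mirrors the output-to-input arrays shown above.

```java
import java.util.Map;
import java.util.TreeMap;

class OffsetCorrection {
    // Keys are output offsets where the input/output difference changes;
    // values are the new cumulative difference from that point on.
    private final TreeMap<Integer, Integer> diffs = new TreeMap<>();

    void addOffCorrectMap(int outputOffset, int cumulativeDiff) {
        diffs.put(outputOffset, cumulativeDiff);
    }

    // Map an output offset back to the corresponding input offset.
    int correct(int outputOffset) {
        Map.Entry<Integer, Integer> e = diffs.floorEntry(outputOffset);
        return outputOffset + (e == null ? 0 : e.getValue());
    }

    public static void main(String[] args) {
        // "(F31)" with both parens erased -> output "F31":
        //   Output offset: 0 1 2 3
        //    Input offset: 1 2 3 5
        OffsetCorrection c = new OffsetCorrection();
        c.addOffCorrectMap(0, 1); // '(' erased before the first output char
        c.addOffCorrectMap(3, 2); // ')' erased after the last output char
        System.out.println(c.correct(0));         // startOffset -> 1 (good)
        System.out.println(c.correct(3));         // endOffset   -> 5 (the bug; 4 wanted)
        System.out.println(1 + c.correct(3 - 1)); // naive off-by-one fix -> 4
    }
}
```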






Re: [CI] Lucene 5x Linux 64 Test Only - Build # 54928 - Failure!

2015-07-08 Thread Michael McCandless
I tried to beast with Oracle JDK 1.8.0_45 but couldn't repro ... maybe this
is IBM J9 specific?

Mike McCandless

On Wed, Jul 8, 2015 at 10:59 AM, bu...@elastic.co wrote:

   *BUILD FAILURE*
 Build URL
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/
 Project: lucene_linux_java8_64_test_only
 Randomization: JDKIBM8,network,heap[1024m],-server +UseParallelGC +UseCompressedOops,sec manager on
 Date of build: Wed, 08 Jul 2015 16:56:37 +0200
 Build duration: 2 min 35 sec
  *CHANGES* No Changes
  *BUILD ARTIFACTS*
 checkout/lucene/build/core/test/temp/junit4-J0-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J0-20150708_165702_515.events
 checkout/lucene/build/core/test/temp/junit4-J1-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J1-20150708_165702_515.events
 checkout/lucene/build/core/test/temp/junit4-J2-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J2-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J3-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J3-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J4-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J4-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J5-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J5-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J6-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J6-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J7-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J7-20150708_165702_515.events
  *FAILED JUNIT TESTS* Name: org.apache.lucene.index Failed: 1 test(s),
 Passed: 786 test(s), Skipped: 25 test(s), Total: 812 test(s)
 *Failed: org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics *
  *CONSOLE OUTPUT* [...truncated 2181 lines...] [junit4] Tests with
 failures: [junit4] -
 org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics [junit4]
 [junit4]  [junit4] JVM J0: 1.17 .. 128.68 = 127.51s [junit4] JVM J1: 1.16
 .. 128.17 = 127.00s [junit4] JVM J2: 0.94 .. 129.06 = 128.12s [junit4]
 JVM J3: 1.43 .. 128.52 = 127.09s [junit4] JVM J4: 1.17 .. 130.66 = 129.49s 
 [junit4]
 JVM J5: 1.43 .. 128.41 = 126.98s [junit4] JVM J6: 0.93 .. 130.79 = 129.86s 
 [junit4]
 JVM J7: 1.18 .. 128.83 = 127.65s [junit4] Execution time total: 2 minutes
 10 seconds [junit4] Tests summary: 405 suites, 3275 tests, 1 error, 53
 ignored (49 assumptions) BUILD FAILED 
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50:
 The following error occurred while executing this line: 
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444:
 The following error occurred while executing this line: 
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:999:
 There were test failures: 405 suites, 3275 tests, 1 error, 53 ignored (49
 assumptions) Total time: 2 minutes 22 seconds Build step 'Invoke Ant'
 marked build as failure Archiving artifacts Recording test results 
 [description-setter]
 Description set: JDKIBM8,network,heap[1024m],-server +UseParallelGC
 +UseCompressedOops,sec manager on Email was triggered for: Failure - 1st 
 Trigger
 Failure - Any was overridden by another trigger and will not send an email. 
 Trigger
 Failure - Still was overridden by another trigger and will not send an
 email. Sending email for trigger: Failure - 1st



[jira] [Updated] (SOLR-7764) Solr indexing hangs if encounters an certain XML parse error

2015-07-08 Thread Sorin Gheorghiu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorin Gheorghiu updated SOLR-7764:
--
Attachment: Solr_XML_parse_error_080715.txt

Error stack trace attached

 Solr indexing hangs if encounters an certain XML parse error
 

 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu
  Labels: indexing
 Attachments: Solr_XML_parse_error_080715.txt


 BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
 'Extended search' feature.
 Solr hangs if a certain error occurs during indexing:
 8.7.2015 15:34:26
 ERROR
 SolrCore
 org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error
 8.7.2015 15:34:26
 ERROR
 SolrDispatchFilter
 null:org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b71) - Build # 13375 - Failure!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13375/
Java: 32bit/jdk1.9.0-ea-b71 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=3259, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=3260, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=3258, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:508) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)4) Thread[id=3261, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=3262, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=3259, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 

[jira] [Created] (SOLR-7764) Solr indexing hangs if encounters an certain XML parse error

2015-07-08 Thread Sorin Gheorghiu (JIRA)
Sorin Gheorghiu created SOLR-7764:
-

 Summary: Solr indexing hangs if encounters an certain XML parse 
error
 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu


BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
'Extended search' feature.

Solr hangs if a certain error occurs during indexing:

8.7.2015 15:34:26
ERROR
SolrCore
org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: 
XML parse error

8.7.2015 15:34:26
ERROR
SolrDispatchFilter
null:org.apache.solr.common.SolrException: 
org.apache.tika.exception.TikaException: XML parse error







RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!

2015-07-08 Thread Uwe Schindler
Hi Rory, hi Roland,

 

the first build with Java 9 build 71 passed the array copy tests. I will also 
inform you if the problems Dawid mentioned about tests and javac failing occur 
again!

 

Uwe

 

-

Uwe Schindler

uschind...@apache.org 

ASF Member, Apache Lucene PMC / Committer

Bremen, Germany

http://lucene.apache.org/

 

From: Uwe Schindler [mailto:uschind...@apache.org] 
Sent: Wednesday, July 08, 2015 12:36 PM
To: 'Rory O'Donnell'; dev@lucene.apache.org; 'Roland Westrelin'
Cc: 'Dalibor Topic'; 'Balchandra Vaidya'; 'Vivek Theeyarath'
Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build 
# 13358 - Failure!

 

Hi,

 

I gave b71 a try. It’s now in the build chain, we will see how it behaves.

 

Uwe

 

-

Uwe Schindler

uschind...@apache.org 

ASF Member, Apache Lucene PMC / Committer

Bremen, Germany

http://lucene.apache.org/

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] 
Sent: Wednesday, July 08, 2015 11:04 AM
To: dev@lucene.apache.org; Roland Westrelin
Cc: rory.odonn...@oracle.com; Uwe Schindler; Dalibor Topic; Balchandra Vaidya; 
Vivek Theeyarath
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build 
# 13358 - Failure!

 

Hi Uwe,Dawid, 

I should have mentioned a fix for JDK-8086046 is also included in b71.

Rgds,Rory

On 07/07/2015 11:51, Rory O'Donnell wrote:

Hi Uwe,Dawid, 

b71 is now available on java.net 

Rgds,Rory 

On 07/07/2015 11:35, Dawid Weiss wrote: 

Hi Roland, 

It's Uwe (Schindler's) jenkins so we'll keep an eye on this when he 
installs the new build and let you know, thanks! 

Dawid 


On Tue, Jul 7, 2015 at 12:08 PM, Roland Westrelin 
roland.westre...@oracle.com wrote: 

Hi Dawid, 



Another failure related to this issue: 
https://bugs.openjdk.java.net/browse/JDK-8129831 

This looks like a duplicate of: 

https://bugs.openjdk.java.net/browse/JDK-8086016 

(that you can’t see). It was integrated in b71. Do you know if your issue is 
reproducible with b71? 

Roland. 


 

 

-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland 


Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!

2015-07-08 Thread Rory O'Donnell

Thanks for that Uwe!

On 08/07/2015 16:21, Uwe Schindler wrote:


Hi Rory, hi Roland,

the first build with Java 9 build 71 passed the array copy tests. I 
will also inform you if the problems Dawid mentioned about tests and 
javac failing occur again!


Uwe

-

Uwe Schindler

uschind...@apache.org

ASF Member, Apache Lucene PMC / Committer

Bremen, Germany

http://lucene.apache.org/

*From:*Uwe Schindler [mailto:uschind...@apache.org]
*Sent:* Wednesday, July 08, 2015 12:36 PM
*To:* 'Rory O'Donnell'; dev@lucene.apache.org; 'Roland Westrelin'
*Cc:* 'Dalibor Topic'; 'Balchandra Vaidya'; 'Vivek Theeyarath'
*Subject:* RE: [JENKINS] Lucene-Solr-trunk-Linux 
(64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!


Hi,

I gave b71 a try. It’s now in the build chain, we will see how it behaves.

Uwe

-

Uwe Schindler

uschind...@apache.org

ASF Member, Apache Lucene PMC / Committer

Bremen, Germany

http://lucene.apache.org/

*From:*Rory O'Donnell [mailto:rory.odonn...@oracle.com]
*Sent:* Wednesday, July 08, 2015 11:04 AM
*To:* dev@lucene.apache.org; Roland Westrelin
*Cc:* rory.odonn...@oracle.com; Uwe Schindler; Dalibor Topic; Balchandra Vaidya; Vivek Theeyarath
*Subject:* Re: [JENKINS] Lucene-Solr-trunk-Linux 
(64bit/jdk1.9.0-ea-b60) - Build # 13358 - Failure!


Hi Uwe,Dawid,

I should have mentioned a fix for JDK-8086046 is also included in b71.

Rgds,Rory

On 07/07/2015 11:51, Rory O'Donnell wrote:

Hi Uwe,Dawid,

b71 is now available on java.net

Rgds,Rory

On 07/07/2015 11:35, Dawid Weiss wrote:

Hi Roland,

It's Uwe (Schindler's) jenkins so we'll keep an eye on this when he
installs the new build and let you know, thanks!

Dawid


On Tue, Jul 7, 2015 at 12:08 PM, Roland Westrelin
roland.westre...@oracle.com
wrote:

Hi Dawid,

Another failure related to this issue:
https://bugs.openjdk.java.net/browse/JDK-8129831

This looks like a duplicate of:

https://bugs.openjdk.java.net/browse/JDK-8086016

(that you can’t see). It was integrated in b71. Do you know if
your issue is reproducible with b71?

Roland.


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Commented] (LUCENE-6616) IndexWriter should list files once on init, and IFD should not suppress FNFE

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618721#comment-14618721
 ] 

ASF subversion and git services commented on LUCENE-6616:
-

Commit 1689893 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1689893 ]

LUCENE-6616: IW lists files only once on init, IFD no longer suppresses FNFE, 
IFD deletes segments_N files last

 IndexWriter should list files once on init, and IFD should not suppress FNFE
 

 Key: LUCENE-6616
 URL: https://issues.apache.org/jira/browse/LUCENE-6616
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6616.patch, LUCENE-6616.patch, LUCENE-6616.patch


 Some nice ideas [~rcmuir] had for cleaning up IW/IFD on init ...






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3313 - Failure

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3313/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:55514/_rue/uc/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:55514/_rue/uc/collection1]
at 
__randomizedtesting.SeedInfo.seed([1E5A3B7F3095885A:960E04A59E69E5A2]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1376)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:74)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (SOLR-7764) Solr indexing hangs if encounters an certain XML parse error

2015-07-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618778#comment-14618778
 ] 

Erick Erickson commented on SOLR-7764:
--

Does this really hang or is it just that the doc doesn't get indexed? IOW, can 
you index other documents?

This looks at first glance like the file you're sending is simply corrupt or in 
some unexpected character set. But that's a guess.

BTW, it's usually best to raise these kinds of issues on the user's list first 
and then raise a JIRA if it's a confirmed problem, unless you're quite sure 
it's a code issue.

 Solr indexing hangs if encounters an certain XML parse error
 

 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu
  Labels: indexing
 Attachments: Solr_XML_parse_error_080715.txt


 BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
 'Extended search' feature.
 Solr hangs if a certain error occurs during indexing:
 8.7.2015 15:34:26
 ERROR
 SolrCore
 org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error
 8.7.2015 15:34:26
 ERROR
 SolrDispatchFilter
 null:org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error






[jira] [Updated] (LUCENE-6629) Random 7200 seconds build timeouts / infinite loops in Lucene tests?

2015-07-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6629:
-
Attachment: 54457_consoleText.txt

Another one (July 5th): 
http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54457/
Build Description: JDKEA9,network,heap[571m],-server +UseSerialGC 
+UseCompressedOops +AggressiveOpts,assert off,sec manager on
I'm attaching the console output for posterity.

 Random 7200 seconds build timeouts / infinite loops in Lucene tests?
 

 Key: LUCENE-6629
 URL: https://issues.apache.org/jira/browse/LUCENE-6629
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: 54457_consoleText.txt


 I'm not sure what's going on here, but every so often a Jenkins run will fail 
 with a build timeout (7200 seconds) with stack traces that do not look like 
 deadlock.  They never reproduce for me.
 We really need to improve test infra here, so that with each HEARTBEAT we also get 
 1) full thread stacks and 2) total CPU usage of the process as reported by 
 the ManagementBean APIs ... this would shed more light on whether the JVM is 
 somehow hung vs our bug (infinite loop).  But infinite loop seems most likely 
 ... the stacks always seem to be somewhere spooky.
 We should try to gather recent Jenkins runs where this is happening, here, to 
 see if there are patterns / we can get to the root cause.
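 What such a per-HEARTBEAT dump could collect through the ManagementBean APIs, as a sketch (the cast to `com.sun.management.OperatingSystemMXBean` assumes a HotSpot-compatible JVM, and `HeartbeatDump` is an invented name, not part of the actual test runner):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

class HeartbeatDump {
    // Collect full stacks of all live threads plus a process CPU reading --
    // the two pieces of information the heartbeat lacks today.
    static String snapshot() {
        StringBuilder sb = new StringBuilder();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            sb.append(info); // ThreadInfo.toString() includes a stack trace
        }
        // getProcessCpuLoad() is on the com.sun.management subtype; the cast
        // assumes a HotSpot-compatible JVM.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        sb.append("process cpu load: ").append(os.getProcessCpuLoad());
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(snapshot());
    }
}
```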
 Anyway, this happened to me on my old beast box, which runs the nightly ant 
 test times graphs: 
 http://people.apache.org/~mikemccand/lucenebench/antcleantest.html
 The 2015/06/27 data point is missing because it failed with this timeout:
 {noformat}
[junit4] Suite: org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   2 ??? 28, 2015 7:01:29 ? 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
[junit4]   2 WARNING: Suite execution timed out: 
 org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   21) Thread[id=1, name=main, state=WAITING, group=main]
[junit4]   2 at java.lang.Object.wait(Native Method)
[junit4]   2 at java.lang.Thread.join(Thread.java:1245)
[junit4]   2 at java.lang.Thread.join(Thread.java:1319)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
[junit4]   22) Thread[id=213, 
 name=TEST-TestDocValuesRewriteMethod.testRegexps-seed#[C2DDF486BB909D8], 
 state=RUNNABLE, group=TGRP-TestDocValuesRewriteMethod]
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.getLiveStates(Operations.java:900)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.hasDeadStates(Operations.java:389)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Automata.makeString(Automata.java:517)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:579)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:519)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:510)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:466)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:109)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:79)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.assertSame(TestDocValuesRewriteMethod.java:117)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.testRegexps(TestDocValuesRewriteMethod.java:109)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2 at 

[jira] [Commented] (SOLR-5756) A utility API to move collections from internal to external

2015-07-08 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618875#comment-14618875
 ] 

Scott Blum commented on SOLR-5756:
--

Hi, has any work started on this issue?  We have a deployment with a very large 
clusterstate.json (most of our collections are there).  New collections added 
since our last upgrade have their own split state.json, but we still have an 
enormous number of collections using the shared file.  We are suspicious that 
the large degree of contention on clusterstate.json is affecting the stability 
of our cluster, so we'd like to split it apart to see if things improve.

A few questions:

1) Do you think it would be safe to do this manually on a running cluster?  
I've only spent a few hours looking at the overseer code, but I got the 
impression that I might just be able to populate all the state.json nodes 
manually, followed by emptying clusterstate.json.  That last step should tickle 
all the running servers, forcing a reload which will get all servers into the 
right separated state.  At least, that's my theory.  Does that sound right to 
you?
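
Not speaking for the Solr devs, but the data transformation behind step 1 is 
mechanical. Here is a hedged sketch (plain Python, stdlib only) of splitting a 
shared clusterstate.json into one state.json payload per collection; the 
per-collection document shape and the target ZK path 
/collections/<name>/state.json are assumptions based on the SOLR-5473-style 
layout, and the actual writes would of course go through a ZooKeeper client:

```python
import json

def split_cluster_state(clusterstate_json: str) -> dict:
    """Split a shared clusterstate.json into one state.json payload per
    collection. Each per-collection document keeps the same
    {collection_name: collection_state} shape the split format uses."""
    shared = json.loads(clusterstate_json)
    return {
        name: json.dumps({name: state}, indent=2)
        for name, state in shared.items()
    }

# Toy example; real collection states carry shards/replicas/router etc.
shared = json.dumps({
    "coll1": {"shards": {"shard1": {"replicas": {}}}},
    "coll2": {"shards": {"shard1": {"replicas": {}}}},
})
per_collection = split_cluster_state(shared)
# per_collection["coll1"] would be written to /collections/coll1/state.json;
# emptying clusterstate.json afterwards is what would trigger the reload.
```
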

2) Suppose I wanted to write a patch for this issue to solve it for everyone: 
is that a reasonable thing to attempt for someone with a lot of ZK knowledge 
but pretty new to Solr?  Or are there a lot of subtleties?

3) Can you opine on the specifics of having an API to move the state out vs. a 
forced migration?  From what I read on SOLR-5473, it sounds like eventually 
we'd just want to force everyone into split state.  Is it too soon to do that?

(Unrelated to this specific issue, I'm actually a committer on Apache Curator, 
and I have a general interest in understanding and possibly helping improve 
overseer's ZK interactions.   Are there any docs outside of the code itself you 
might recommend for me to read?)


 A utility API to move collections from internal to external
 ---

 Key: SOLR-5756
 URL: https://issues.apache.org/jira/browse/SOLR-5756
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 SOLR-5473 allows creation of collection with state stored outside of 
 clusterstate.json. We would need an API to  move existing 'internal' 
 collections outside



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6629) Random 7200 seconds build timeouts / infinite loops in Lucene tests?

2015-07-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618825#comment-14618825
 ] 

Dawid Weiss commented on LUCENE-6629:
-

Can we have the 1.9ea JVM upgraded to b71 on that machine? Maybe it's a JVM 
issue that's been solved already.

 Random 7200 seconds build timeouts / infinite loops in Lucene tests?
 

 Key: LUCENE-6629
 URL: https://issues.apache.org/jira/browse/LUCENE-6629
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: 54457_consoleText.txt


 I'm not sure what's going on here, but every so often a Jenkins run will fail 
 with a build timeout (7200 seconds) with stack traces that do not look like 
 deadlock.  They never reproduce for me.
 We really need to improve test infra here, so that each HEARTBEAT we also got 
 1) full thread stacks and 2) total CPU usage of the process as reported by 
 the ManagementBean APIs ... this would shed more light on whether the JVM is 
 somehow hung vs our bug (infinite loop).  But infinite loop seems most likely 
 ... the stacks always seem to be somewhere spooky.
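 The heartbeat diagnostics wished for here can be approximated cheaply. As an 
 illustration only (the real fix would use Java's ThreadMXBean.dumpAllThreads 
 plus the OperatingSystemMXBean CPU counters, not Python), here is a 
 stdlib-Python sketch of a heartbeat that grabs every thread's stack and the 
 process's cumulative CPU time:
 {noformat}
import os
import sys
import threading
import traceback

def heartbeat_report() -> str:
    """Collect what each HEARTBEAT could log: every live thread's stack
    trace plus the process's total CPU usage so far."""
    lines = []
    frames = sys._current_frames()  # maps thread id -> topmost frame
    for thread in threading.enumerate():
        lines.append(f"Thread {thread.name} (id={thread.ident}):")
        frame = frames.get(thread.ident)
        if frame is not None:
            lines.extend(l.rstrip() for l in traceback.format_stack(frame))
    times = os.times()  # user + system CPU seconds for this process
    lines.append(f"CPU: user={times.user:.2f}s system={times.system:.2f}s")
    return "\n".join(lines)

print(heartbeat_report())
 {noformat}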
 We should try to gather recent Jenkins runs where this is happening, here, to 
 see if there are patterns / we can get to the root cause.
 Anyway, this happened to me on my old beast box, which runs the nightly ant 
 test times graphs: 
 http://people.apache.org/~mikemccand/lucenebench/antcleantest.html
 The 2015/06/27 data point is missing because it failed with this timeout:
 {noformat}
[junit4] Suite: org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   2 ??? 28, 2015 7:01:29 ? 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
[junit4]   2 WARNING: Suite execution timed out: 
 org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   21) Thread[id=1, name=main, state=WAITING, group=main]
[junit4]   2 at java.lang.Object.wait(Native Method)
[junit4]   2 at java.lang.Thread.join(Thread.java:1245)
[junit4]   2 at java.lang.Thread.join(Thread.java:1319)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
[junit4]   22) Thread[id=213, 
 name=TEST-TestDocValuesRewriteMethod.testRegexps-seed#[C2DDF486BB909D8], 
 state=RUNNABLE, group=TGRP-TestDocValuesRewriteMethod]
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.getLiveStates(Operations.java:900)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.hasDeadStates(Operations.java:389)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Automata.makeString(Automata.java:517)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:579)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:519)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:510)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:466)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:109)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:79)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.assertSame(TestDocValuesRewriteMethod.java:117)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.testRegexps(TestDocValuesRewriteMethod.java:109)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2 at java.lang.reflect.Method.invoke(Method.java:497)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
  

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b71) - Build # 13376 - Still Failing!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13376/
Java: 32bit/jdk1.9.0-ea-b71 -client -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CollectionReloadTest

Error Message:
ERROR: SolrIndexSearcher opens=8 closes=7

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=8 closes=7
at __randomizedtesting.SeedInfo.seed([967FC38BDC8E744E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:472)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CollectionReloadTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionReloadTest: 1) Thread[id=971, 
name=qtp3724733-971, state=RUNNABLE, group=TGRP-CollectionReloadTest] 
at java.util.WeakHashMap.get(WeakHashMap.java:403) at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:101)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:219)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:444)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
 at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) 
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)  
   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)   
  at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b21) - Build # 13194 - Failure!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13194/
Java: 64bit/jdk1.8.0_60-ea-b21 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock

Error Message:
1 thread leaked from TEST scope at 
testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock): 1) 
Thread[id=214, name=processKiller-148-thread-1, state=RUNNABLE, 
group=TGRP-TestCodecLoadingDeadlock] at sun.misc.Unsafe.unpark(Native 
Method) at 
java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
 at 
java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457) 
at 
java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
 at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from TEST 
scope at testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock): 
   1) Thread[id=214, name=processKiller-148-thread-1, state=RUNNABLE, 
group=TGRP-TestCodecLoadingDeadlock]
at sun.misc.Unsafe.unpark(Native Method)
at java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
at 
java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
at 
java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 
__randomizedtesting.SeedInfo.seed([EFEF4DFD71B68757:E284ACE977EC2A81]:0)




Build Log:
[...truncated 758 lines...]
   [junit4] Suite: org.apache.lucene.codecs.TestCodecLoadingDeadlock
   [junit4]   2 Jul 08, 2015 7:29:12 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2 SEVERE: 1 thread leaked from TEST scope at 
testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock): 
   [junit4]   21) Thread[id=214, name=processKiller-148-thread-1, 
state=RUNNABLE, group=TGRP-TestCodecLoadingDeadlock]
   [junit4]   2 at sun.misc.Unsafe.unpark(Native Method)
   [junit4]   2 at 
java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141)
   [junit4]   2 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
   [junit4]   2 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
   [junit4]   2 at 
java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
   [junit4]   2 at 
java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
   [junit4]   2 at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
   [junit4]   2 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
   [junit4]   2 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]   2 at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 Jul 08, 2015 7:29:12 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
   [junit4]   2 INFO: Starting to interrupt leaked threads:
   [junit4]   21) Thread[id=214, name=processKiller-148-thread-1, 
state=TERMINATED, group={null group}]
   [junit4]   2 Jul 08, 2015 7:29:12 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
   [junit4]   2 INFO: All leaked threads terminated.
   [junit4] ERROR   0.81s J1 | TestCodecLoadingDeadlock.testDeadlock 
   [junit4] Throwable #1: 
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from TEST 
scope at testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock): 
   [junit4]1) Thread[id=214, name=processKiller-148-thread-1, 
state=RUNNABLE, group=TGRP-TestCodecLoadingDeadlock]
   [junit4] at sun.misc.Unsafe.unpark(Native Method)
   [junit4] at 

Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b21) - Build # 13194 - Failure!

2015-07-08 Thread Dawid Weiss
I'll fix this one.

D.

On Wed, Jul 8, 2015 at 9:33 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13194/
 Java: 64bit/jdk1.8.0_60-ea-b21 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock

 Error Message:
 1 thread leaked from TEST scope at 
 testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock): 1) 
 Thread[id=214, name=processKiller-148-thread-1, state=RUNNABLE, 
 group=TGRP-TestCodecLoadingDeadlock] at sun.misc.Unsafe.unpark(Native 
 Method) at 
 java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141) 
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
  at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
  at 
 java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)   
   at 
 java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
  at 
 java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

 Stack Trace:
 com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from TEST 
 scope at testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock):
1) Thread[id=214, name=processKiller-148-thread-1, state=RUNNABLE, 
 group=TGRP-TestCodecLoadingDeadlock]
 at sun.misc.Unsafe.unpark(Native Method)
 at java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
 at 
 java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
 at 
 java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
 at 
 java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 at 
 __randomizedtesting.SeedInfo.seed([EFEF4DFD71B68757:E284ACE977EC2A81]:0)




 Build Log:
 [...truncated 758 lines...]
[junit4] Suite: org.apache.lucene.codecs.TestCodecLoadingDeadlock
[junit4]   2 Jul 08, 2015 7:29:12 AM 
 com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
[junit4]   2 SEVERE: 1 thread leaked from TEST scope at 
 testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock):
[junit4]   21) Thread[id=214, name=processKiller-148-thread-1, 
 state=RUNNABLE, group=TGRP-TestCodecLoadingDeadlock]
[junit4]   2 at sun.misc.Unsafe.unpark(Native Method)
[junit4]   2 at 
 java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141)
[junit4]   2 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)
[junit4]   2 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
[junit4]   2 at 
 java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
[junit4]   2 at 
 java.util.concurrent.ThreadPoolExecutor.tryTerminate(ThreadPoolExecutor.java:714)
[junit4]   2 at 
 java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1007)
[junit4]   2 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
[junit4]   2 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[junit4]   2 at java.lang.Thread.run(Thread.java:745)
[junit4]   2 Jul 08, 2015 7:29:12 AM 
 com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
[junit4]   2 INFO: Starting to interrupt leaked threads:
[junit4]   21) Thread[id=214, name=processKiller-148-thread-1, 
 state=TERMINATED, group={null group}]
[junit4]   2 Jul 08, 2015 7:29:12 AM 
 com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
[junit4]   2 INFO: All leaked threads terminated.
[junit4] ERROR   0.81s J1 | TestCodecLoadingDeadlock.testDeadlock 
[junit4] Throwable #1: 
 com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from TEST 
 scope at testDeadlock(org.apache.lucene.codecs.TestCodecLoadingDeadlock):
[junit4]1) 

[jira] [Resolved] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6666.
-
Resolution: Fixed

 Thread pool's threads may escape in TestCodecLoadingDeadlock
 

 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618162#comment-14618162
 ] 

ASF subversion and git services commented on LUCENE-6666:
-

Commit 1689804 from [~dawidweiss] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689804 ]

LUCENE-6666: Thread pool's threads may escape in TestCodecLoadingDeadlock

 Thread pool's threads may escape in TestCodecLoadingDeadlock
 

 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6666) Thread pool's threads may escape in TestCodecLoadingDeadlock

2015-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618253#comment-14618253
 ] 

Uwe Schindler commented on LUCENE-6666:
---

Thanks!

 Thread pool's threads may escape in TestCodecLoadingDeadlock
 

 Key: LUCENE-6666
 URL: https://issues.apache.org/jira/browse/LUCENE-6666
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.3, Trunk, 6.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by filters

2015-07-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618292#comment-14618292
 ] 

Michael McCandless commented on LUCENE-6667:


Hmm, {{SynonymFilter}} tries to preserve all attributes of the original 
incoming tokens (it uses {{capture/restoreState}} to do this).

But for the new tokens it inserts, it does use {{clearAttributes}} to make a 
completely blank slate, and then sets the term, offset, posInc/Length etc.

Which tokens (original input tokens vs. the inserted ones) are missing your 
custom attribute?
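
To make that contract concrete, here is a self-contained sketch (plain Python 
stand-ins, not Lucene's actual AttributeSource/AttributeImpl classes) of why a 
token rebuilt from clearAttributes() loses custom attribute values while a 
captured/restored one keeps them:

```python
import copy

class AttributeSource:
    """Minimal stand-in for Lucene's AttributeSource state handling."""
    def __init__(self):
        self.attrs = {}              # attribute name -> value
    def clear_attributes(self):      # blank slate, like clearAttributes()
        self.attrs = {}
    def capture_state(self):         # like captureState()
        return copy.deepcopy(self.attrs)
    def restore_state(self, state):  # like restoreState()
        self.attrs = copy.deepcopy(state)

src = AttributeSource()
src.attrs = {"term": "fast", "myCustomAttr": 42}

# Path 1: an original token passed through via capture/restoreState --
# every attribute, including the custom one, survives.
state = src.capture_state()
src.clear_attributes()
src.restore_state(state)
assert src.attrs["myCustomAttr"] == 42

# Path 2: an inserted synonym token starts from clearAttributes(), and the
# filter only re-sets the attributes it knows about -- custom value is gone.
src.clear_attributes()
src.attrs["term"] = "quick"          # filter sets term/offset/posInc only
assert "myCustomAttr" not in src.attrs
```
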

 Custom attributes get cleared by filters
 

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might of course be to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure whether this has other unwanted effects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13201 - Still Failing!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13201/
Java: 64bit/jdk1.9.0-ea-b71 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 1) Thread[id=2563, 
name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] 
at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:508) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=2567, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=2565, 
name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=2564, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=2566, 
name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 
   1) Thread[id=2563, name=apacheds, state=WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:508)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=2567, name=kdcReplayCache.data, state=TIMED_WAITING, 

[GitHub] lucene-solr pull request: Fix NPE in LukeRequestHandler in getAnal...

2015-07-08 Thread grossws
GitHub user grossws opened a pull request:

https://github.com/apache/lucene-solr/pull/185

Fix NPE in LukeRequestHandler in getAnalyzerInfo

An NPE is thrown when Luke tries to iterate 
`TokenizerChain#getCharFilterFactories()`,
which is null if the 2-arg `TokenizerChain` constructor is used.

Fixes SOLR-7765
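
The shape of the fix is a standard null-guard at the iteration site. A 
hypothetical sketch (the `analyzer_info` name and factory strings below are 
illustrative, not Solr's actual signatures):

```python
def analyzer_info(char_filter_factories):
    """Iterate the factories defensively: a chain built with the 2-arg
    constructor reports None instead of an empty list, so treat None
    as 'no char filters' rather than dereferencing it."""
    info = []
    for factory in (char_filter_factories or ()):  # None -> empty
        info.append(factory)
    return info

assert analyzer_info(None) == []                   # no NPE-equivalent
assert analyzer_info(["HTMLStrip"]) == ["HTMLStrip"]
```
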

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/grossws/lucene-solr fix-solr-7765

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185


commit 7a0f08915e73f0ef28d4b9af8bd613612ca39946
Author: Konstantin Gribov gros...@gmail.com
Date:   2015-07-08T16:30:04Z

Fix NPE in LukeRequestHandler in getAnalyzerInfo

An NPE is thrown when Luke tries to iterate 
`TokenizerChain#getCharFilterFactories()`,
which is null if the 2-arg `TokenizerChain` constructor is used.

Fixes SOLR-7765




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [CI] Lucene 5x Linux 64 Test Only - Build # 54928 - Failure!

2015-07-08 Thread Shai Erera
I failed to reproduce using IBM J9 1.8.0. I didn't beast it though, just
ran the "reproduce with" line several times. Do you know the specific 1.8
version this build used? I'm using the latest, so it could be a bug that
was already fixed.

Shai

On Wed, Jul 8, 2015 at 6:12 PM, Michael McCandless m...@elastic.co wrote:

 I tried to beast with Oracle JDK 1.8.0_45 but couldn't repro ... maybe
 this is IBM J9 specific?

 Mike McCandless

 On Wed, Jul 8, 2015 at 10:59 AM, bu...@elastic.co wrote:

   *BUILD FAILURE*
 Build URL
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/
 Project:lucene_linux_java8_64_test_only Randomization: 
 JDKIBM8,network,heap[1024m],-server
 +UseParallelGC +UseCompressedOops,sec manager on Date of build:Wed, 08
 Jul 2015 16:56:37 +0200 Build duration:2 min 35 sec
  *CHANGES* No Changes
  *BUILD ARTIFACTS*
 checkout/lucene/build/core/test/temp/junit4-J0-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J0-20150708_165702_515.events
 checkout/lucene/build/core/test/temp/junit4-J1-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J1-20150708_165702_515.events
 checkout/lucene/build/core/test/temp/junit4-J2-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J2-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J3-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J3-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J4-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J4-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J5-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J5-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J6-20150708_165702_516.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J6-20150708_165702_516.events
 checkout/lucene/build/core/test/temp/junit4-J7-20150708_165702_515.events
 http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54928/artifact/checkout/lucene/build/core/test/temp/junit4-J7-20150708_165702_515.events
  *FAILED JUNIT TESTS* Name: org.apache.lucene.index Failed: 1 test(s),
 Passed: 786 test(s), Skipped: 25 test(s), Total: 812 test(s)
 *Failed: org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics *
 *CONSOLE OUTPUT* [...truncated 2181 lines...]
 [junit4] Tests with failures:
 [junit4]   - org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics
 [junit4]
 [junit4] JVM J0: 1.17 .. 128.68 = 127.51s
 [junit4] JVM J1: 1.16 .. 128.17 = 127.00s
 [junit4] JVM J2: 0.94 .. 129.06 = 128.12s
 [junit4] JVM J3: 1.43 .. 128.52 = 127.09s
 [junit4] JVM J4: 1.17 .. 130.66 = 129.49s
 [junit4] JVM J5: 1.43 .. 128.41 = 126.98s
 [junit4] JVM J6: 0.93 .. 130.79 = 129.86s
 [junit4] JVM J7: 1.18 .. 128.83 = 127.65s
 [junit4] Execution time total: 2 minutes 10 seconds
 [junit4] Tests summary: 405 suites, 3275 tests, 1 error, 53 ignored (49 assumptions)
 BUILD FAILED
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50: The following error occurred while executing this line:
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444: The following error occurred while executing this line:
 /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:999: There were test failures: 405 suites, 3275 tests, 1 error, 53 ignored (49 assumptions)
 Total time: 2 minutes 22 seconds
 Build step 'Invoke Ant' marked build as failure
 Archiving artifacts
 Recording test results
 [description-setter] Description set: JDKIBM8,network,heap[1024m],-server +UseParallelGC +UseCompressedOops,sec manager on
 Email was triggered for: Failure - 1st
 Trigger Failure - Any was overridden by another trigger and will not send an email.
 Trigger Failure - Still was overridden by another trigger and will not send an email.
 Sending email for trigger: Failure - 1st





[jira] [Created] (SOLR-7765) TokenizerChain without char filters cause NPE in luke request handler

2015-07-08 Thread Konstantin Gribov (JIRA)
Konstantin Gribov created SOLR-7765:
---

 Summary: TokenizerChain without char filters cause NPE in luke 
request handler
 Key: SOLR-7765
 URL: https://issues.apache.org/jira/browse/SOLR-7765
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
Reporter: Konstantin Gribov
Priority: Minor


{{TokenizerChain}} created using the 2-arg constructor has {{null}} in 
{{charFilters}}, so {{LukeRequestHandler}} throws an NPE when iterating it.

Will create PR in a couple of minutes.
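For illustration, the guard that avoids this NPE can be sketched in plain Java. The class and method names below are hypothetical stand-ins, not the actual {{LukeRequestHandler}} code; the point is only the null check before iterating a possibly-null factory list:

```java
import java.util.Collections;
import java.util.List;

public class CharFilterGuard {
    // Hypothetical stand-in for TokenizerChain#getCharFilterFactories(),
    // which may return null when the 2-arg constructor was used
    // (i.e. no char filters configured).
    static List<String> getCharFilterFactories() {
        return null;
    }

    // Substitute an empty list before iterating, instead of assuming non-null.
    static int countCharFilters() {
        List<String> factories = getCharFilterFactories();
        if (factories == null) {
            factories = Collections.emptyList();
        }
        int count = 0;
        for (String factory : factories) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // No NPE even though the underlying getter returned null.
        System.out.println(countCharFilters()); // prints 0
    }
}
```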



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-5756) A utility API to move collections from internal to external

2015-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618910#comment-14618910
 ] 

Noble Paul commented on SOLR-5756:
--

bq.Do you think it would be safe to do this manually on a running cluster?

It is never fully safe. There are intermediate state changes which you may lose.

bq. Suppose I wanted to try to write a patch for this issue to help solve it 
for everyone, is that a reasonable thing to attempt for someone with a lot of 
ZK knowledge but pretty new to Solr? 

bq. Is it too soon to do that?

It is not too soon; 5.0 has been out for a while.

A lot of ZK knowledge is not required to do this. You need to know how the 
overseer updates state and how states are updated by 
{{ZkStateReader.createClusterStateWatchersAndUpdate}} in each node.

bq.Can you opine on the specifics of having an API to move the state out vs. a 
forced migration?

I shall follow up with a comment on the steps involved in actually doing this.



 A utility API to move collections from internal to external
 ---

 Key: SOLR-5756
 URL: https://issues.apache.org/jira/browse/SOLR-5756
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 SOLR-5473 allows creation of collection with state stored outside of 
 clusterstate.json. We would need an API to  move existing 'internal' 
 collections outside






[jira] [Commented] (LUCENE-6629) Random 7200 seconds build timeouts / infinite loops in Lucene tests?

2015-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618922#comment-14618922
 ] 

Uwe Schindler commented on LUCENE-6629:
---

Policeman Jenkins has Java 9 EA b71 already.

 Random 7200 seconds build timeouts / infinite loops in Lucene tests?
 

 Key: LUCENE-6629
 URL: https://issues.apache.org/jira/browse/LUCENE-6629
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: 54457_consoleText.txt


 I'm not sure what's going on here, but every so often a Jenkins run will fail 
 with a build timeout (7200 seconds) with stack traces that do not look like 
 deadlock.  They never reproduce for me.
 We really need to improve test infra here, so that each HEARTBEAT we also got 
 1) full thread stacks and 2) total CPU usage of the process as reported by 
 the ManagementBean APIs ... this would shed more light on whether the JVM is 
 somehow hung vs our bug (infinite loop).  But infinite loop seems most likely 
 ... the stacks always seem to be somewhere spooky.
 We should try to gather recent Jenkins runs where this is happening, here, to 
 see if there are patterns / we can get to the root cause.
 Anyway, this happened to me on my old beast box, which runs the nightly ant 
 test times graphs: 
 http://people.apache.org/~mikemccand/lucenebench/antcleantest.html
 The 2015/06/27 data point is missing because it failed with this timeout:
 {noformat}
[junit4] Suite: org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   2 ??? 28, 2015 7:01:29 ? 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
[junit4]   2 WARNING: Suite execution timed out: 
 org.apache.lucene.search.TestDocValuesRewriteMethod
[junit4]   2 1) Thread[id=1, name=main, state=WAITING, group=main]
[junit4]   2 at java.lang.Object.wait(Native Method)
[junit4]   2 at java.lang.Thread.join(Thread.java:1245)
[junit4]   2 at java.lang.Thread.join(Thread.java:1319)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
[junit4]   2 at 
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
[junit4]   2 2) Thread[id=213, 
 name=TEST-TestDocValuesRewriteMethod.testRegexps-seed#[C2DDF486BB909D8], 
 state=RUNNABLE, group=TGRP-TestDocValuesRewriteMethod]
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.getLiveStates(Operations.java:900)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Operations.hasDeadStates(Operations.java:389)
[junit4]   2 at 
 org.apache.lucene.util.automaton.Automata.makeString(Automata.java:517)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:579)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:519)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:510)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
[junit4]   2 at 
 org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:466)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.&lt;init&gt;(RegexpQuery.java:109)
[junit4]   2 at 
 org.apache.lucene.search.RegexpQuery.&lt;init&gt;(RegexpQuery.java:79)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.assertSame(TestDocValuesRewriteMethod.java:117)
[junit4]   2 at 
 org.apache.lucene.search.TestDocValuesRewriteMethod.testRegexps(TestDocValuesRewriteMethod.java:109)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit4]   2 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2 at java.lang.reflect.Method.invoke(Method.java:497)
[junit4]   2 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
[junit4]   2 at 
 

[jira] [Commented] (SOLR-7764) Solr indexing hangs if encounters an certain XML parse error

2015-07-08 Thread Sorin Gheorghiu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618935#comment-14618935
 ] 

Sorin Gheorghiu commented on SOLR-7764:
---

Yes, Solr can index other documents, but it really hangs at an XML file, so I 
have to kill the related processes:
/bin/bash /opt/lucene-search-2.1.3/lsearchd
java 
-Djava.rmi.server.codebase=file:///opt/lucene-search-2.1.3/LuceneSearch.jar 
-Djava.rmi.server.hostname=WikiTestVZ -jar 
/opt/lucene-search-2.1.3/LuceneSearch.jar

The XML file is not corrupted, because it can be opened with Excel (but it 
probably contains characters the XML parser does not expect).
My expectation is that Solr should skip a file when such exceptions occur 
during indexing and continue with the next files, but instead it hangs.

P.S. Sorry, next time I will use the user's list first 
(solr-u...@lucene.apache.org right?)



 Solr indexing hangs if encounters an certain XML parse error
 

 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu
  Labels: indexing
 Attachments: Solr_XML_parse_error_080715.txt


 BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
 'Extended search' feature.
 Solr hangs if during indexing certain error occurs:
 8.7.2015 15:34:26
 ERROR
 SolrCore
 org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error
 8.7.2015 15:34:26
 ERROR
 SolrDispatchFilter
 null:org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error






[jira] [Commented] (LUCENE-6667) Custom attributes get cleared by SynonymFilter

2015-07-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618951#comment-14618951
 ] 

David Smiley commented on LUCENE-6667:
--

Yeah, it could be appropriate or it could be inappropriate; there is no 
API/mechanism for an attribute to specify what it wants.  I think it's better 
to err on copying the attribute state for inserted tokens from the (first) 
input token, versus dropping attribute state.  It needn't be done in an 
inconsistent way -- we could do it in a documented consistent way -- such as 
take from the first token.  I've had to work around this problem (also in 
WordDelimiterFilter, and CommonGrams) with ugly hacks in a custom attribute -- 
e.g. the clear() method not actually clearing, with my custom Tokenizer 
controlling the actual clearing.

As an aside, when I was doing some custom attribute stuff, I couldn't help but 
think our approach to saving attribute state seemed a little heavy, since a 
call to capture() creates a linked list to hold each attribute impl.  Maybe the 
state could be re-used with some ref-counting. I dunno.
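The workaround mentioned above (a clear() that does not actually clear, with the tokenizer controlling the real reset) can be sketched in plain Java. This is a hypothetical stand-alone illustration; real code would extend Lucene's AttributeImpl and implement its interface contract:

```java
// Minimal sketch of the "clear() that doesn't clear" hack described above.
// StickyAttribute is a hypothetical name; in Lucene this would be a
// custom AttributeImpl subclass.
public class StickyAttribute {
    private String value;

    public void setValue(String v) { value = v; }
    public String getValue() { return value; }

    // Called (via AttributeSource.clearAttributes()) for every token.
    // Deliberately a no-op, so filters that insert tokens
    // (SynonymFilter, WordDelimiterFilter, CommonGrams, ...)
    // cannot wipe the value.
    public void clear() {
        // intentionally empty
    }

    // The custom Tokenizer calls this explicitly when it really wants to
    // reset the attribute, keeping full control of the lifecycle.
    public void reallyClear() {
        value = null;
    }
}
```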

 Custom attributes get cleared by SynonymFilter
 --

 Key: LUCENE-6667
 URL: https://issues.apache.org/jira/browse/LUCENE-6667
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Oliver Becker

 I believe the Lucene API enables users to define their custom attributes (by 
 extending {{AttributeImpl}}) which may be added by custom Tokenizers. 
 It seems, the {{clear}} and {{copyTo}} methods must be implemented to clear 
 and restore the state of this custom attribute.
 However, some filters (in our case the SynonymFilter) simply call 
 {{AttributeSource.clearAttributes}} without invoking {{copyTo}}. Instead the 
 filter just resets some known attributes, simply ignoring all other custom 
 attributes. In the end our custom attribute value is lost.
 Is this a bug in {{SynonymFilter}} (and others) or are we using the API in 
 the wrong way?
 A solution might be of course to provide empty implementations of {{clear}} 
 and {{copyTo}}, but I'm not sure if this has other unwanted effects.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13200 - Failure!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13200/
Java: 64bit/jdk1.9.0-ea-b71 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF

Error Message:
Invalid Date String:'Tue Mar 09 13:44:49 GMT+07:00 2010'

Stack Trace:
org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar 09 13:44:49 
GMT+07:00 2010'
at 
__randomizedtesting.SeedInfo.seed([232D0A5404C2ADED:4DEB715B070706B8]:0)
at 
org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:981)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:122)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:127)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:230)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocalFromHandler(ExtractingRequestHandlerTest.java:737)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocal(ExtractingRequestHandlerTest.java:744)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF(ExtractingRequestHandlerTest.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 

[jira] [Commented] (SOLR-7765) TokenizerChain without char filters cause NPE in luke request handler

2015-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618905#comment-14618905
 ] 

ASF GitHub Bot commented on SOLR-7765:
--

GitHub user grossws opened a pull request:

https://github.com/apache/lucene-solr/pull/185

Fix NPE in LukeRequestHandler in getAnalyzerInfo

An NPE is thrown when Luke tries to iterate 
`TokenizerChain#getCharFilterFactories()`,
which is null if the 2-arg `TokenizerChain` constructor is used.

Fixes SOLR-7765

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/grossws/lucene-solr fix-solr-7765

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185


commit 7a0f08915e73f0ef28d4b9af8bd613612ca39946
Author: Konstantin Gribov gros...@gmail.com
Date:   2015-07-08T16:30:04Z

Fix NPE in LukeRequestHandler in getAnalyzerInfo

An NPE is thrown when Luke tries to iterate 
`TokenizerChain#getCharFilterFactories()`,
which is null if the 2-arg `TokenizerChain` constructor is used.

Fixes SOLR-7765




 TokenizerChain without char filters cause NPE in luke request handler
 -

 Key: SOLR-7765
 URL: https://issues.apache.org/jira/browse/SOLR-7765
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
Reporter: Konstantin Gribov
Priority: Minor

 {{TokenizerChain}} created using 2-arg constructor has {{null}} in 
 {{charFilters}}, so {{LukeRequestHandler}} throws NPE on iterating it.
 Will create PR in a couple of minutes.






[jira] [Commented] (SOLR-7764) Solr indexing hangs if encounters an certain XML parse error

2015-07-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618953#comment-14618953
 ] 

Erick Erickson commented on SOLR-7764:
--

bq: Sorry, next time I will use the user's list first

NP. What's really not clear is whether this is a Solr or Tika problem.

If at all possible, can you post the xml file and schema? Or is the 
information private?

 Solr indexing hangs if encounters an certain XML parse error
 

 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu
  Labels: indexing
 Attachments: Solr_XML_parse_error_080715.txt


 BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
 'Extended search' feature.
 Solr hangs if during indexing certain error occurs:
 8.7.2015 15:34:26
 ERROR
 SolrCore
 org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error
 8.7.2015 15:34:26
 ERROR
 SolrDispatchFilter
 null:org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error






Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Chris Hostetter

2 nearly identical failures from policeman immediately after upgrading 
jdk1.9.0 to b71 ... seems like more than a coincidence.

ant test  -Dtestcase=ExtractingRequestHandlerTest -Dtests.method=testArabicPDF 
-Dtests.seed=232D0A5404C2ADED -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=en_JM -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

ant test  -Dtestcase=ExtractingRequestHandlerTest -Dtests.method=testArabicPDF 
-Dtests.seed=F05E63CDB3511062 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=kk -Dtests.timezone=Asia/Hebron -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

...I haven't dug into the details of the test, but neither of those 
commands reproduces the failure for me on Java 7 or Java 8 -- can someone 
with a few Java 9 builds try them out on both b70 (or earlier) and b71 to 
confirm whether this is definitely a new failure in b71?



: Date: Wed, 8 Jul 2015 14:01:17 + (UTC)
: From: Policeman Jenkins Server jenk...@thetaphi.de
: Reply-To: dev@lucene.apache.org
: To: sh...@apache.org, dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
: 13198 - Still Failing!
: 
: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13198/
: Java: 64bit/jdk1.9.0-ea-b71 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
: 
: 4 tests failed.
: FAILED:  
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF
: 
: Error Message:
: Invalid Date String:'Tue Mar 09 08:44:49 EEST 2010'
: 
: Stack Trace:
: org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar 09 
08:44:49 EEST 2010'
:   at 
__randomizedtesting.SeedInfo.seed([F05E63CDB3511062:9E9818C2B094BB37]:0)
:   at 
org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
:   at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
:   at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
:   at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
:   at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
:   at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
:   at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
:   at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
:   at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
:   at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
:   at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:981)
:   at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
:   at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
:   at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:122)
:   at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:127)
:   at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:230)
:   at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
:   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
:   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058)
:   at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339)
:   at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocalFromHandler(ExtractingRequestHandlerTest.java:737)
:   at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocal(ExtractingRequestHandlerTest.java:744)
:   at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabicPDF(ExtractingRequestHandlerTest.java:526)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:502)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 

RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
Hi Hoss,

it does not fail with build 70, but suddenly fails with build 71.

The major change in build 71 (next to the fix for the nasty arraycopy bug) is 
that the JDK now uses a completely different locale database by default (CLDR):
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8008577
http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/22f901cf304f

This fails with any random seed. To me it looks like the Solr Trie date field 
parser seems to have problems with the new locale data. I have no idea how it 
parses the date (SimpleDateFormat?) and whether that is really locale-independent.
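As a self-contained sketch (not Solr's actual parsing code), the following shows the locale-pinning that makes parsing a Date.toString()-style string independent of the JDK's default locale data (JRE vs CLDR providers, en_JM, kk, ...); the class and method names are illustrative:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class LocaleSafeDateParse {
    // Pattern matching Date.toString() output,
    // e.g. "Tue Mar 09 08:44:49 GMT 2010".
    static final String PATTERN = "EEE MMM dd HH:mm:ss zzz yyyy";

    public static Date parseLocaleSafe(String s) {
        // Pinning Locale.ENGLISH makes the day/month names in the pattern
        // match regardless of which locale database is the JDK default.
        SimpleDateFormat f = new SimpleDateFormat(PATTERN, Locale.ENGLISH);
        try {
            return f.parse(s);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Invalid Date String:'" + s + "'", e);
        }
    }

    public static void main(String[] args) {
        Date d = parseLocaleSafe("Tue Mar 09 08:44:49 GMT 2010");
        System.out.println(d.getTime());
    }
}
```

With the default-locale SimpleDateFormat constructor instead, the same parse can fail under locales whose day/month names differ, which matches the seeds that failed above (en_JM, kk).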

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Wednesday, July 08, 2015 10:57 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 Hi Hoss,
 
 It is strange that only this single test fails. The date mentioned looks
 good. I have no idea if DIH uses JDK date parsing; maybe there is a new bug.
 The last JDK Policeman used was b60; my last try with the previous b70
 failed horribly and did not even reach Solr (arraycopy bugs), so the bug
 may have been introduced in b61 or later.
 
 Uwe
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
  Sent: Wednesday, July 08, 2015 7:42 PM
  To: dev@lucene.apache.org
  Cc: sh...@apache.org
  Subject: Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
  Build #
  13198 - Still Failing!
 
 
  2 nearly identical failures from policeman immediatley after upgrading
  jdk1.9.0 to b71 ... seems like more then a coincidence.
 
  ant test  -Dtestcase=ExtractingRequestHandlerTest -
  Dtests.method=testArabicPDF -Dtests.seed=232D0A5404C2ADED -
  Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en_JM -
  Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true
  -Dtests.file.encoding=UTF-
  8
 
  ant test  -Dtestcase=ExtractingRequestHandlerTest -
  Dtests.method=testArabicPDF -Dtests.seed=F05E63CDB3511062 -
  Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kk -
  Dtests.timezone=Asia/Hebron -Dtests.asserts=true -
  Dtests.file.encoding=UTF-8
 
  ...I haven't dug into the details of the test, but neither of those
  commands reproduce the failure for me on Java7 or Java8 -- can someone
  with a few
  Java9 builds try them out on both b70 (or earlier) and b71 to try and
  confirm if this is definitely a new failure in b71 ?
 
 
 
  : Date: Wed, 8 Jul 2015 14:01:17 + (UTC)
  : From: Policeman Jenkins Server jenk...@thetaphi.de
  : Reply-To: dev@lucene.apache.org
  : To: sh...@apache.org, dev@lucene.apache.org
  : Subject: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
  : 13198 - Still Failing!
  :
  : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13198/
  : Java: 64bit/jdk1.9.0-ea-b71 -XX:+UseCompressedOops -
  XX:+UseConcMarkSweepGC
  :
  : 4 tests failed.
  : FAILED:
  org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testAr
  abic
  PDF
  :
  : Error Message:
  : Invalid Date String:'Tue Mar 09 08:44:49 EEST 2010'
  :
  : Stack Trace:
  : org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar
  09
  08:44:49 EEST 2010'
  :   at
 
 __randomizedtesting.SeedInfo.seed([F05E63CDB3511062:9E9818C2B094BB3
  7]:0)
  :   at
  org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
  :   at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
  :   at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
  :   at
 
 org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:4
  8
  )
  :   at
 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.jav
  a:123)
  :   at
 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpd
  ateCommand.java:83)
  :   at
 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandle
  r2.java:237)
  :   at
 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler
  2.java:163)
  :   at
 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUp
  dateProcessorFactory.java:69)
  :   at
 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(Up
  dateRequestProcessor.java:51)
  :   at
  org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd
  (
  DistributedUpdateProcessor.java:981)
  :   at
 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd
  (
  DistributedUpdateProcessor.java:706)
  :   at
 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpd
  ateProcessorFactory.java:104)
  :   at
  org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(Extr
  ac
  tingDocumentLoader.java:122)
  :   at
  

[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619385#comment-14619385
 ] 

Ryan Josal commented on SOLR-6234:
--

That will be great, Tim! For my personal use case, I added a feature to the 
qparser that, after constructing the LocalSolrQueryRequest, would apply the 
handler config (defaults, appends, invariants) from the otherCore (say, its 
/select handler) to the request, so that in a nutshell your other query could 
be a simple string and would follow whatever edismax rules you had 
configured for the other core, instead of the stricter db-style Lucene 
query syntax.  If there's interest, I am happy to share the patch.

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch


 it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
  - {{score=none}} is *default*, eg if you *omit* this localparam 
 - supports {{b=100}} param to pass {{Query.setBoost()}}.
 - {{multiVals=true|false}} is introduced 
 - there is a test coverage for cross core join case. 
 - so far it joins string and multivalued string fields (Sorted, SortedSet, 
 Binary), but not numeric DVs; follow-up: LUCENE-5868  
 -there was a bug in cross core join, however there is a workaround for it- 
 it's fixed in the Dec '14 patch.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Chris Hostetter

My guess (haven't verified) is that if you dig into the 
ExtractionRequestHandler code, you will probably find it parsing Tika 
Metadata Strings (all Tika metadata are simple Strings, correct?) into 
Date objects if/when it can/should based on the field type -- and when it 
can't, it propagates the metadata string as-is (so that, for example, a 
subsequent update processor can parse it).

The error from TrieDateField is probably just because a completely 
untouched metadata string (coming from Tika) is not a Date (or a correctly 
ISO-formatted string) ... leaving the big question: what changed in 
the JDK such that that Tika metadata is no longer being parsed properly 
into a Date in the first place?

(NOTE: I'm speculating in all of this based on what little I remember 
about Tika)
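A rough sketch of the try-parse-else-pass-through behavior described above (the class and pattern list are illustrative assumptions, not the real ExtractionRequestHandler/DateUtil code):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

public class MetadataCoerce {
    // Illustrative subset of the date patterns such a handler might try
    private static final String[] PATTERNS = {
        "EEE MMM dd HH:mm:ss zzz yyyy",   // cookie-style, e.g. "Tue Mar 09 08:44:49 EEST 2010"
        "yyyy-MM-dd'T'HH:mm:ss'Z'"        // already-ISO input
    };

    /** Try to parse a metadata string as a date; on failure return it unchanged. */
    static Object coerce(String value) {
        for (String p : PATTERNS) {
            SimpleDateFormat fmt = new SimpleDateFormat(p, Locale.ENGLISH);
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
            try {
                return fmt.parse(value);
            } catch (ParseException ignored) {
                // fall through to the next pattern
            }
        }
        return value; // propagate the raw string as-is
    }

    public static void main(String[] args) {
        System.out.println(coerce("2010-03-09T08:44:49Z").getClass().getSimpleName()); // Date
        System.out.println(coerce("not a date").getClass().getSimpleName());           // String
    }
}
```

If none of the patterns match, the raw string reaches the field type unchanged, which is where a TrieDateField would then reject it.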


: Date: Wed, 8 Jul 2015 23:35:52 +0200
: From: Uwe Schindler u...@thetaphi.de
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
:  13198 - Still Failing!
: 
: In fact this is really strange! The Exception is thrown because the 
DateMathParser does not find the Z inside the date string - it just complains 
that it's not ISO8601... To me it looks like although it should only get an ISO8601 
date, somehow the extracting contenthandler sends a default-formatted date to the 
importer, obviously the one from the input document - maybe TIKA exports it 
incorrectly...
: 
: Unfortunately I have no idea how to explain this; the bug looks like it is in 
contrib/extraction. Importing standard documents with an ISO8601 date to Solr 
works perfectly fine! I have to set up debugging in Eclipse with Java 9!
: 
: Uwe
: 
: -
: Uwe Schindler
: H.-H.-Meier-Allee 63, D-28213 Bremen
: http://www.thetaphi.de
: eMail: u...@thetaphi.de
: 
: 
:  -Original Message-
:  From: Uwe Schindler [mailto:u...@thetaphi.de]
:  Sent: Wednesday, July 08, 2015 11:19 PM
:  To: dev@lucene.apache.org
:  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - 
Build #
:  13198 - Still Failing!
:  
:  Hi Hoss,
:  
:  it does not fail with build 70, but suddenly fails with build 71.
:  
:  The major change in build 71 is (next to the fix for the nasty arraycopy 
bug),
:  that the JDK now uses a completely different Locale database by default
:  (CLDR):
:  http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8008577
:  http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/22f901cf304f
:  
:  This fails with any random seed. To me it looks like the Solr Trie Date 
field
:  parser seem to have problems with the new locale data. I have no idea how
:  it parses the date (simpledateformat?) and if this is really Locale-
:  Independent.
:  
:  Uwe
:  
:  -
:  Uwe Schindler
:  H.-H.-Meier-Allee 63, D-28213 Bremen
:  http://www.thetaphi.de
:  eMail: u...@thetaphi.de
:  
:  
:   -Original Message-
:   From: Uwe Schindler [mailto:u...@thetaphi.de]
:   Sent: Wednesday, July 08, 2015 10:57 PM
:   To: dev@lucene.apache.org
:   Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - 
Build
:  #
:   13198 - Still Failing!
:  
:   Hi Hoss,
:  
:   It is strange that only this single test fails. The date mentioned looks 
good. I
:   have no idea if DIH uses JDK date parsing, maybe there is a new bug.
:   The last JDK , Policeman used was b60, my last try with pervious b70 was
:   failed horrible and did not even reach Solr (arraycopy bugs), so the bug 
may
:   have introduced in b61 or later.
:  
:   Uwe
:   -
:   Uwe Schindler
:   H.-H.-Meier-Allee 63, D-28213 Bremen
:   http://www.thetaphi.de
:   eMail: u...@thetaphi.de
:  
:  
:-Original Message-
:From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
:Sent: Wednesday, July 08, 2015 7:42 PM
:To: dev@lucene.apache.org
:Cc: sh...@apache.org
:Subject: Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
:Build #
:13198 - Still Failing!
:   
:   
:2 nearly identical failures from policeman immediatley after upgrading
:jdk1.9.0 to b71 ... seems like more then a coincidence.
:   
:ant test  -Dtestcase=ExtractingRequestHandlerTest -
:Dtests.method=testArabicPDF -Dtests.seed=232D0A5404C2ADED -
:Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en_JM -
:Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true
:-Dtests.file.encoding=UTF-
:8
:   
:ant test  -Dtestcase=ExtractingRequestHandlerTest -
:Dtests.method=testArabicPDF -Dtests.seed=F05E63CDB3511062 -
:Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kk -
:Dtests.timezone=Asia/Hebron -Dtests.asserts=true -
:Dtests.file.encoding=UTF-8
:   
:...I haven't dug into the details of the test, but neither of those
:commands reproduce the failure for me on Java7 or Java8 -- can
:  someone
:with a few
:Java9 builds try them out on both b70 (or earlier) and b71 to try and
:confirm if this is definitely a new failure in b71 ?
:   
:   
:   
:: 

[JENKINS] Lucene-Artifacts-5.x - Build # 899 - Failure

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-5.x/899/

No tests ran.

Build Log:
[...truncated 296 lines...]
ERROR: Failed to update 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/branches/branch_5x failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1030)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1011)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:987)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2474)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/branches/branch_5x failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 35 more
Caused by: org.tmatesoft.svn.core.SVNAuthenticationException: svn: E170001: 
OPTIONS request failed on '/repos/asf/lucene/dev/branches/branch_5x'
svn: E170001: OPTIONS of '/repos/asf/lucene/dev/branches/branch_5x': 403 
Forbidden (http://svn.apache.org)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:62)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:771)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 34 more
Caused by: svn: E170001: OPTIONS of '/repos/asf/lucene/dev/branches/branch_5x': 
403 Forbidden (http://svn.apache.org)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:189)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:141)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 171 - Failure

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/171/

No tests ran.

Build Log:
[...truncated 301 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1030)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1011)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:987)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2474)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 35 more
Caused by: org.tmatesoft.svn.core.SVNAuthenticationException: svn: E170001: 
OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
svn: E170001: OPTIONS of '/repos/asf/lucene/dev/trunk': 403 Forbidden 
(http://svn.apache.org)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:62)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:771)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 34 more
Caused by: svn: E170001: OPTIONS of '/repos/asf/lucene/dev/trunk': 403 
Forbidden (http://svn.apache.org)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:189)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:141)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPRequest.createDefaultErrorMessage(HTTPRequest.java:455)
at 

[jira] [Comment Edited] (SOLR-7764) Solr indexing hangs if it encounters a certain XML parse error

2015-07-08 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619187#comment-14619187
 ] 

Tim Allison edited comment on SOLR-7764 at 7/8/15 7:22 PM:
---

Y, how are you getting an xml exception _and_ a permanent hang?  Are you sure 
that stacktrace is related to the hang?

I agree w Erick...if you can submit the triggering file, that'd be best..

We're aware of one potential xml hang: TIKA-1401.

Unfortunately, Tika can hang permanently.  I would be _really_ surprised if 
this were a Solr problem.  Permanent hangs w/ Tika happen rarely, and we try to 
fix problems when they happen.  It is best to run Tika in a separate JVM, either 
via tika-server, ForkParser, tika-batch, or roll your own hadoop wrapper, or 
stay tuned for: SOLR-7632


was (Author: talli...@mitre.org):
Y, how are you getting an xml exception _and_ a permanent hang?  Are you sure 
that stacktrace is related to the hang?

I agree w Erick...if you can submit the triggering file, that'd be best..

We're aware of one potential xml hang: TIKA-1401.

Unfortunately, Tika will hang permanently.  I would be _really_ surprised if 
this were a Solr problem.  Permanent hangs w Tika happens rarely, and we try to 
fix problems when they happen.  It is best to run Tika in a separate jvm either 
via tika-server, ForkParser, tika-batch or role your own hadoop-wrapper.

 Solr indexing hangs if it encounters a certain XML parse error
 

 Key: SOLR-7764
 URL: https://issues.apache.org/jira/browse/SOLR-7764
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
 Environment: Ubuntu 12.04.5 LTS
Reporter: Sorin Gheorghiu
  Labels: indexing
 Attachments: Solr_XML_parse_error_080715.txt


 BlueSpice (http://bluespice.com/) uses Solr to index documents for the 
 'Extended search' feature.
 Solr hangs if a certain error occurs during indexing:
 8.7.2015 15:34:26
 ERROR
 SolrCore
 org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error
 8.7.2015 15:34:26
 ERROR
 SolrDispatchFilter
 null:org.apache.solr.common.SolrException: 
 org.apache.tika.exception.TikaException: XML parse error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619292#comment-14619292
 ] 

Michael McCandless commented on LUCENE-6595:


Thanks [~caomanhdat]!

Is this the failing case if you pass {{off}} instead of {{0}} to 
{{addOffCorrectMap}}?

{noformat}
@@ -215,7 +230,8 @@
 };
 
 int numRounds = RANDOM_MULTIPLIER * 1;
-checkRandomData(random(), analyzer, numRounds);
+//checkRandomData(random(), analyzer, numRounds);
+checkAnalysisConsistency(random(),analyzer,true,m?(y ');
 analyzer.close();
   }
{noformat}

Best to add a {{// nocommit}} comment when making such temporary changes... and 
it's spooky that the test fails: with the right default here (hmm, maybe it 
should be {{off + cumulativeDiff}}, since it's an input offset) it should behave 
exactly as before?

Can you mark the old {{addCorrectMap}} as deprecated?  We can remove it in 
trunk but leave it deprecated in 5.x ... seems like any subclasses here really 
need to tell us the input offset...

For the default impl for {{CharFilter.correctEnd}} should we just use 
{{CharFilter.correct}}?

Can we rename correctOffset -> correctStartOffset now that we also have a 
correctEndOffset?

Does {{(correctOffset(endOffset-1)+1)}} not work?  It would be nice not to add 
the new method to {{CharFilter}} (only to {{Tokenizer}}).

 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch


 Spinoff from this original Elasticsearch issue: 
 https://github.com/elastic/elasticsearch/issues/11726
 If I make a MappingCharFilter with these mappings:
 {noformat}
   "(" -> ""
   ")" -> ""
 {noformat}
 i.e., just erase left and right paren, then tokenizing the string
 (F31) with e.g. WhitespaceTokenizer, produces a single token F31,
 with start offset 1 (good).
 But for its end offset, I would expect/want 4, but it produces 5
 today.
 This can be easily explained given how the mapping works: each time a
 mapping rule matches, we update the cumulative offset difference,
 conceptually as an array like this (it's encoded more compactly):
 {noformat}
   Output offset: 0 1 2 3
Input offset: 1 2 3 5
 {noformat}
 When the tokenizer produces F31, it assigns it startOffset=0 and
 endOffset=3 based on the characters it sees (F, 3, 1).  It then asks
 the CharFilter to correct those offsets, mapping them backwards
 through the above arrays, which creates startOffset=1 (good) and
 endOffset=5 (bad).
 At first, to fix this, I thought this is an off-by-1 and when
 correcting the endOffset we really should return
 1+correct(outputEndOffset-1), which would return the correct value (4)
 here.
 But that's too naive, e.g. here's another example:
 {noformat}
    - cc
 {noformat}
 If I then tokenize , today we produce the correct offsets (0, 4)
 but if we do this off-by-1 fix for endOffset, we would get the wrong
 endOffset (2).
 I'm not sure what to do here...
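 The correction table above can be sketched directly (a hypothetical helper that hard-codes the arrays from the example, not the real CharFilter encoding):

```java
public class OffsetCorrection {
    // INPUT_OFFSET[outputOffset] = input offset, as in the table above:
    //   Output offset: 0 1 2 3
    //    Input offset: 1 2 3 5
    static final int[] INPUT_OFFSET = {1, 2, 3, 5};

    /** Map an output-space offset back to input space, clamping at the table end. */
    static int correct(int outputOffset) {
        return INPUT_OFFSET[Math.min(outputOffset, INPUT_OFFSET.length - 1)];
    }

    public static void main(String[] args) {
        // Token "F31" has startOffset=0, endOffset=3 in output space
        System.out.println("start -> " + correct(0));              // 1 (good)
        System.out.println("end   -> " + correct(3));              // 5 (the reported bad value)
        System.out.println("naive -> " + (correct(3 - 1) + 1));    // 4 (works here, not in general)
    }
}
```

 This reproduces the reported behavior: the start offset maps correctly, the end offset overshoots, and the off-by-1 fix happens to work for this case but fails for the second mapping example.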



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-5.x - Build # 878 - Failure

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/878/

No tests ran.

Build Log:
[...truncated 303 lines...]
ERROR: Failed to update 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/branches/branch_5x failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1030)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1011)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:987)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2474)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/branches/branch_5x failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 35 more
Caused by: org.tmatesoft.svn.core.SVNAuthenticationException: svn: E170001: 
OPTIONS request failed on '/repos/asf/lucene/dev/branches/branch_5x'
svn: E170001: OPTIONS of '/repos/asf/lucene/dev/branches/branch_5x': 403 
Forbidden (http://svn.apache.org)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:62)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:771)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 34 more
Caused by: svn: E170001: OPTIONS of '/repos/asf/lucene/dev/branches/branch_5x': 
403 Forbidden (http://svn.apache.org)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:189)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:141)
at 

RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
Hi,

It is exactly like that. The contenthandler tries to parse all metadata strings 
as dates using DateUtil.parseDate(), and if that fails, it leaves them unchanged. 
On JDK 8 it succeeds in parsing the date and then converts it to the ISO-8601 Solr 
format. With JDK 9 b71 it passes the plain string through in the TIKA format (TIKA 
uses HTTP cookie-like dates), which then fails in TrieDateField.

Unfortunately, I was not able to set up Eclipse with JDK 9, so I cannot debug 
this. I just did this on JDK 8 to verify how it works...

We should add a testcase for DateUtil and then try to find out how this fails 
in JDK 9. It looks like this class is completely untested with strange date 
formats.
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
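
One locale-independent way to check the ISO-8601 side of this is java.time, which does not go through the JDK's locale data for this format. This is a sketch, not the DateUtil tests under discussion:

```java
import java.time.Instant;
import java.time.format.DateTimeParseException;

public class IsoDateCheck {
    /** True if the string is a valid ISO-8601 instant (e.g. "...T...Z"). */
    static boolean isIso8601(String s) {
        try {
            Instant.parse(s); // ISO_INSTANT format, locale-independent
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isIso8601("2010-03-09T08:44:49Z"));          // true
        System.out.println(isIso8601("Tue Mar 09 08:44:49 EEST 2010")); // false
    }
}
```

A test along these lines would at least pin down which inputs DateUtil is expected to normalize, independent of the JDK's locale provider.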


 -Original Message-
 From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
 Sent: Wednesday, July 08, 2015 11:46 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 
 My guess (haven't verified) is that if you dig into the
 ExtractionRequestHandler code, you will probably find it parsing Tika
 Metadata Strings (All Tika metadata are simple Strings, correct?) into
 Date objects if/when it can/should based on the field type -- and when it
 can't it propogates the metadata string as is (so that, for example, a
 subsequent update processor can parse it)
 
 The error from TrieDateField is probably just because a completely
 untouched metadta string (coming from Tika) is not a Date (or a correctly
 ISO formatted string) ... leaving the big question being: what changed in
 the JDK such that that tika metadta is no longer being parsed properly
 into a Date in the first place?
 
 (NOTE: I'm speculating in all of this based on what little i remember
 about Tika)
 
 
 : Date: Wed, 8 Jul 2015 23:35:52 +0200
 : From: Uwe Schindler u...@thetaphi.de
 : Reply-To: dev@lucene.apache.org
 : To: dev@lucene.apache.org
 : Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build
 #
 :  13198 - Still Failing!
 :
: In fact this is really strange! The exception is thrown because the
: DateMathParser does not find the Z inside the date string - it just
: complains that it's not ISO-8601... Although it should only get ISO-8601
: dates, somehow the extracting content handler sends a default-formatted
: date to the importer, obviously the one from the input document - maybe
: Tika exports it incorrectly...
 :
: Unfortunately I have no idea how to explain this; the bug looks like it is
: in contrib/extraction. Importing standard documents with ISO-8601 dates to
: Solr works perfectly fine! I have to set up debugging in Eclipse with Java 9!
 :
 : Uwe
 :
 : -
 : Uwe Schindler
 : H.-H.-Meier-Allee 63, D-28213 Bremen
 : http://www.thetaphi.de
 : eMail: u...@thetaphi.de
 :
 :
 :  -Original Message-
 :  From: Uwe Schindler [mailto:u...@thetaphi.de]
 :  Sent: Wednesday, July 08, 2015 11:19 PM
 :  To: dev@lucene.apache.org
 :  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
 Build #
 :  13198 - Still Failing!
 : 
 :  Hi Hoss,
 : 
 :  it does not fail with build 70, but suddenly fails with build 71.
 : 
 :  The major change in build 71 is (next to the fix for the nasty arraycopy
 bug),
 :  that the JDK now uses a completely different Locale database by default
 :  (CLDR):
 :  http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8008577
 :  http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/22f901cf304f
 : 
:  This fails with any random seed. To me it looks like the Solr Trie Date
:  field parser seems to have problems with the new locale data. I have no
:  idea how it parses the date (SimpleDateFormat?) and whether this is really
:  locale-independent.
 : 
 :  Uwe
 : 
 :  -
 :  Uwe Schindler
 :  H.-H.-Meier-Allee 63, D-28213 Bremen
 :  http://www.thetaphi.de
 :  eMail: u...@thetaphi.de
 : 
 : 
 :   -Original Message-
 :   From: Uwe Schindler [mailto:u...@thetaphi.de]
 :   Sent: Wednesday, July 08, 2015 10:57 PM
 :   To: dev@lucene.apache.org
 :   Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
 Build
 :  #
 :   13198 - Still Failing!
 :  
 :   Hi Hoss,
 :  
:   It is strange that only this single test fails. The date mentioned looks
:   good. I have no idea if DIH uses JDK date parsing; maybe there is a new bug.
:   The last JDK Policeman used was b60; my last try with the previous b70
:   failed horribly and did not even reach Solr (arraycopy bugs), so the bug
:   may have been introduced in b61 or later.
 :  
 :   Uwe
 :   -
 :   Uwe Schindler
 :   H.-H.-Meier-Allee 63, D-28213 Bremen
 :   http://www.thetaphi.de
 :   eMail: u...@thetaphi.de
 :  
 :  
 :-Original Message-
 :From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
 :Sent: Wednesday, July 08, 2015 7:42 PM
 :To: dev@lucene.apache.org
 :Cc: sh...@apache.org
 :

[jira] [Commented] (LUCENE-6616) IndexWriter should list files once on init, and IFD should not suppress FNFE

2015-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619202#comment-14619202
 ] 

ASF subversion and git services commented on LUCENE-6616:
-

Commit 1689942 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1689942 ]

LUCENE-6616: Lucene50SegmentInfoFormat should not claim to have created a file 
until the createOutput in fact succeeded

 IndexWriter should list files once on init, and IFD should not suppress FNFE
 

 Key: LUCENE-6616
 URL: https://issues.apache.org/jira/browse/LUCENE-6616
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6616.patch, LUCENE-6616.patch, LUCENE-6616.patch


 Some nice ideas [~rcmuir] had for cleaning up IW/IFD on init ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7734) MapReduce Indexer can error when using collection

2015-07-08 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-7734:

Attachment: SOLR-7734.patch

Updated patch to have a (help-suppressed) flag that allows old behaviour. I 
don't think this is a big issue since the contrib is documented as 
experimental, but I've added it regardless.

Can any committers take a look at this?

 MapReduce Indexer can error when using collection
 -

 Key: SOLR-7734
 URL: https://issues.apache.org/jira/browse/SOLR-7734
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
Reporter: Mike Drob
 Fix For: 5.3, Trunk

 Attachments: SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch


 When running the MapReduceIndexerTool, it will usually pull a 
 {{solrconfig.xml}} from ZK for the collection that it is running against. 
 This can be problematic for several reasons:
 * Performance: The configuration in ZK will likely have several query 
 handlers, and lots of other components that don't make sense in an 
 indexing-only use of EmbeddedSolrServer (ESS).
 * Classpath Resources: If the Solr services are using some kind of additional 
 service (such as Sentry for auth) then the indexer will not have access to 
 the necessary configurations without the user jumping through several hoops.
 * Distinct Configuration Needs: Enabling Soft Commits on the ESS doesn't make 
 sense. There are other configurations that 
 * Update Chain Behaviours: I'm under the impression that UpdateChains may 
 behave differently in ESS than a SolrCloud cluster. Is it safe to depend on 
 consistent behaviour here?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
Hi Hoss,

It is strange that only this single test fails. The date mentioned looks good. 
I have no idea if DIH uses JDK date parsing; maybe there is a new bug.
The last JDK Policeman used was b60; my last try with the previous b70 failed 
horribly and did not even reach Solr (arraycopy bugs), so the bug may have been 
introduced in b61 or later.
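One way a new JDK build can change date parsing without any Solr code change is through locale data: SimpleDateFormat resolves day and month names from the locale database, so the same pattern can parse a string under one locale and reject it under another. A minimal sketch (the sample string and locale pair are illustrative assumptions, not taken from the failing test):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class LocaleSensitivity {
    /** Returns true if the value parses with the Tika-style pattern under the given locale. */
    public static boolean parses(String value, Locale locale) {
        SimpleDateFormat f =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", locale);
        try {
            f.parse(value);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String tikaStyle = "Fri Jun 22 16:57:58 GMT 2012";
        // English day/month names resolve under an English locale ...
        System.out.println(parses(tikaStyle, Locale.US));       // true
        // ... but not under a locale whose names differ (German abbreviates
        // Friday differently), so success depends entirely on which locale
        // database the JDK ships.
        System.out.println(parses(tikaStyle, Locale.GERMANY));  // false
    }
}
```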

Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
 Sent: Wednesday, July 08, 2015 7:42 PM
 To: dev@lucene.apache.org
 Cc: sh...@apache.org
 Subject: Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 
 2 nearly identical failures from Policeman immediately after upgrading
 jdk1.9.0 to b71 ... seems like more than a coincidence.
 
 ant test  -Dtestcase=ExtractingRequestHandlerTest -
 Dtests.method=testArabicPDF -Dtests.seed=232D0A5404C2ADED -
 Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en_JM -
 Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true -Dtests.file.encoding=UTF-
 8
 
 ant test  -Dtestcase=ExtractingRequestHandlerTest -
 Dtests.method=testArabicPDF -Dtests.seed=F05E63CDB3511062 -
 Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kk -
 Dtests.timezone=Asia/Hebron -Dtests.asserts=true -
 Dtests.file.encoding=UTF-8
 
 ...I haven't dug into the details of the test, but neither of those commands
 reproduces the failure for me on Java 7 or Java 8 -- can someone with a few
 Java 9 builds try them out on both b70 (or earlier) and b71 to try and confirm 
 whether this is definitely a new failure in b71?
 
 
 
 : Date: Wed, 8 Jul 2015 14:01:17 + (UTC)
 : From: Policeman Jenkins Server jenk...@thetaphi.de
 : Reply-To: dev@lucene.apache.org
 : To: sh...@apache.org, dev@lucene.apache.org
 : Subject: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 : 13198 - Still Failing!
 :
 : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13198/
 : Java: 64bit/jdk1.9.0-ea-b71 -XX:+UseCompressedOops -
 XX:+UseConcMarkSweepGC
 :
 : 4 tests failed.
 : FAILED:
 org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabic
 PDF
 :
 : Error Message:
 : Invalid Date String:'Tue Mar 09 08:44:49 EEST 2010'
 :
 : Stack Trace:
 : org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar 09
 08:44:49 EEST 2010'
 : at
 __randomizedtesting.SeedInfo.seed([F05E63CDB3511062:9E9818C2B094BB3
 7]:0)
 : at
 org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
 : at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
 : at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
 : at
 org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48
 )
 : at
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.jav
 a:123)
 : at
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpd
 ateCommand.java:83)
 : at
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandle
 r2.java:237)
 : at
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler
 2.java:163)
 : at
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUp
 dateProcessorFactory.java:69)
 : at
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(Up
 dateRequestProcessor.java:51)
 : at
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(
 DistributedUpdateProcessor.java:981)
 : at
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(
 DistributedUpdateProcessor.java:706)
 : at
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpd
 ateProcessorFactory.java:104)
 : at
 org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(Extrac
 tingDocumentLoader.java:122)
 : at
 org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(Extra
 ctingDocumentLoader.java:127)
 : at
 org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(Extractin
 gDocumentLoader.java:230)
 : at
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(Co
 ntentStreamHandlerBase.java:74)
 : at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandl
 erBase.java:143)
 : at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058)
 : at
 org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339)
 : at
 org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocalF
 romHandler(ExtractingRequestHandlerTest.java:737)
 : at
 org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocal(
 ExtractingRequestHandlerTest.java:744)
 : at
 org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testArabic
 PDF(ExtractingRequestHandlerTest.java:526)
 : at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 : at
 

[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619436#comment-14619436
 ] 

Timothy Potter commented on SOLR-6234:
--

[~rjosal] sounds cool ... definitely share the patch!

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch


 it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
  - {{score=none}} is *default*, eg if you *omit* this localparam 
 - supports {{b=100}} param to pass {{Query.setBoost()}}.
 - {{multiVals=true|false}} is introduced 
 - there is a test coverage for cross core join case. 
 - so far it joins string and multivalue string fields (Sorted, SortedSet, 
 Binary), but not Numerics DVs. follow-up LUCENE-5868  
 -there was a bug in cross core join, however there is a workaround for it- 
 it's fixed in Dec'14 patch.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619449#comment-14619449
 ] 

Robert Muir commented on LUCENE-6595:
-

I am lost in all the correct() methods now for charfilters. I think at most 
tokenizer should only have one such method.

 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch


 Spinoff from this original Elasticsearch issue: 
 https://github.com/elastic/elasticsearch/issues/11726
 If I make a MappingCharFilter with these mappings:
 {noformat}
   "(" -> ""
   ")" -> ""
 {noformat}
 i.e., just erase left and right paren, then tokenizing the string
 "(F31)" with e.g. WhitespaceTokenizer produces a single token "F31",
 with start offset 1 (good).
 But for its end offset, I would expect/want 4, but it produces 5
 today.
 This can be easily explained given how the mapping works: each time a
 mapping rule matches, we update the cumulative offset difference,
 conceptually as an array like this (it's encoded more compactly):
 {noformat}
   Output offset: 0 1 2 3
Input offset: 1 2 3 5
 {noformat}
 When the tokenizer produces "F31", it assigns it startOffset=0 and
 endOffset=3 based on the characters it sees (F, 3, 1).  It then asks
 the CharFilter to correct those offsets, mapping them backwards
 through the above arrays, which creates startOffset=1 (good) and
 endOffset=5 (bad).
 At first, to fix this, I thought this is an off-by-1 and when
 correcting the endOffset we really should return
 1+correct(outputEndOffset-1), which would return the correct value (4)
 here.
 But that's too naive, e.g. here's another example:
 {noformat}
   "cccc" -> "cc"
 {noformat}
 If I then tokenize "cccc", today we produce the correct offsets (0, 4)
 but if we do this off-by-1 fix for endOffset, we would get the wrong
 endOffset (2).
 I'm not sure what to do here...
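The backwards mapping described above can be modeled with a tiny lookup table. This is a sketch, not Lucene's BaseCharFilter: it just replays the output-to-input offset table from the parenthesis example to show where the corrected endOffset of 5 comes from.

```java
public class OffsetCorrection {
    // Output offset -> input offset for the "(F31)" example: the output
    // chars F, 3, 1 sit at input offsets 1, 2, 3, and end-of-output maps
    // past the erased ")" to input offset 5.
    static final int[] INPUT_OFFSET = {1, 2, 3, 5};

    /** Maps an offset in the filtered output back into the original input. */
    static int correct(int outputOffset) {
        return INPUT_OFFSET[outputOffset];
    }

    public static void main(String[] args) {
        // The tokenizer sees the token at output offsets [0, 3).
        System.out.println(correct(0));  // 1 : corrected startOffset (good)
        System.out.println(correct(3));  // 5 : corrected endOffset (the bug; 4 was expected)
    }
}
```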



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7172) addreplica API fails with incorrect error msg cannot create collection

2015-07-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619473#comment-14619473
 ] 

Erick Erickson commented on SOLR-7172:
--

[~shalinmangar] I happen to be in this code for another JIRA; if I find this, 
should I just fix it, or are you already working on it?

 addreplica API fails with incorrect error msg cannot create collection
 

 Key: SOLR-7172
 URL: https://issues.apache.org/jira/browse/SOLR-7172
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 5.2, Trunk


 Steps to reproduce:
 # Create 1 node solr cloud cluster
 # Create collection 'test' with 
 numShards=1&replicationFactor=1&maxShardsPerNode=1
 # Call addreplica API:
 {code}
 http://localhost:8983/solr/admin/collections?action=addreplica&collection=test&shard=shard1&wt=json
  
 {code}
 API fails with the following response:
 {code}
 {
   "responseHeader": {
     "status": 400,
     "QTime": 9
   },
   "Operation ADDREPLICA caused exception:": "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Cannot create collection test. No live Solr-instances",
   "exception": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "rspCode": 400
   },
   "error": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "code": 400
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6234:


Assignee: Timothy Potter

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch


 it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
  - {{score=none}} is *default*, eg if you *omit* this localparam 
 - supports {{b=100}} param to pass {{Query.setBoost()}}.
 - {{multiVals=true|false}} is introduced 
 - there is a test coverage for cross core join case. 
 - so far it joins string and multivalue string fields (Sorted, SortedSet, 
 Binary), but not Numerics DVs. follow-up LUCENE-5868  
 -there was a bug in cross core join, however there is a workaround for it- 
 it's fixed in Dec'14 patch.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13202 - Still Failing!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13202/
Java: 64bit/jdk1.9.0-ea-b71 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testPasswordProtected

Error Message:
Invalid Date String:'Fri Jun 22 16:57:58 AST 2012'

Stack Trace:
org.apache.solr.common.SolrException: Invalid Date String:'Fri Jun 22 16:57:58 
AST 2012'
at 
__randomizedtesting.SeedInfo.seed([32A951671E8CC848:340CD59CE0B59892]:0)
at 
org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
at org.apache.solr.schema.TrieField.createField(TrieField.java:657)
at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:981)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:122)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:127)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:230)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocalFromHandler(ExtractingRequestHandlerTest.java:737)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.loadLocal(ExtractingRequestHandlerTest.java:744)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testPasswordProtected(ExtractingRequestHandlerTest.java:662)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 733 - Still Failing

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/733/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 30 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 30 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([9CD33C16F091D81F:148703CC5E6DB5E7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:537)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618122#comment-14618122
 ] 

Cao Manh Dat commented on LUCENE-6595:
--

[~mikemccand] Sorry for the delay, I will submit a patch tonight (6 hours from now)

 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch


 Spinoff from this original Elasticsearch issue: 
 https://github.com/elastic/elasticsearch/issues/11726
 If I make a MappingCharFilter with these mappings:
 {noformat}
   "(" -> ""
   ")" -> ""
 {noformat}
 i.e., just erase left and right paren, then tokenizing the string
 "(F31)" with e.g. WhitespaceTokenizer produces a single token "F31",
 with start offset 1 (good).
 But for its end offset, I would expect/want 4, but it produces 5
 today.
 This can be easily explained given how the mapping works: each time a
 mapping rule matches, we update the cumulative offset difference,
 conceptually as an array like this (it's encoded more compactly):
 {noformat}
   Output offset: 0 1 2 3
Input offset: 1 2 3 5
 {noformat}
 When the tokenizer produces "F31", it assigns it startOffset=0 and
 endOffset=3 based on the characters it sees (F, 3, 1).  It then asks
 the CharFilter to correct those offsets, mapping them backwards
 through the above arrays, which creates startOffset=1 (good) and
 endOffset=5 (bad).
 At first, to fix this, I thought this is an off-by-1 and when
 correcting the endOffset we really should return
 1+correct(outputEndOffset-1), which would return the correct value (4)
 here.
 But that's too naive, e.g. here's another example:
 {noformat}
   "cccc" -> "cc"
 {noformat}
 If I then tokenize "cccc", today we produce the correct offsets (0, 4)
 but if we do this off-by-1 fix for endOffset, we would get the wrong
 endOffset (2).
 I'm not sure what to do here...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619814#comment-14619814
 ] 

Cao Manh Dat commented on LUCENE-6595:
--

Thanks [~mikemccand]!
{quote}
@@ -215,7 +230,8 @@
 };
 
 int numRounds = RANDOM_MULTIPLIER * 1;
-checkRandomData(random(), analyzer, numRounds);
+//checkRandomData(random(), analyzer, numRounds);
+checkAnalysisConsistency(random(),analyzer,true,m?(y ');
 analyzer.close();
   }
{quote}
My fault, I played around with the test and forgot to roll back. 

{quote}
It's spooky the test fails, because with the right default here (hmm, maybe it 
should be {code} off + cumulativeDiff {code} since it's an input offset) it 
should behave exactly as before?
{quote}
Nice idea. I changed it to {code} off - cumulativeDiff {code}, and it works 
perfectly.

{quote}
For the default impl for CharFilter.correctEnd should we just use 
CharFilter.correct?
Can we rename correctOffset -- correctStartOffset now that we also have a 
correctEndOffset?
{quote}
Nice refactoring.

{quote}
Does (correctOffset(endOffset-1)+1) not work? It would be nice not to add the 
new method to CharFilter (only to Tokenizer).
{quote}
I tried to do that, but it can't work, because the information for the special 
case lives in BaseCharFilter.

[~rcmuir] I will try to explain the solution in a slide; I'm not very good at 
explaining it :( 


 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch


 Spinoff from this original Elasticsearch issue: 
 https://github.com/elastic/elasticsearch/issues/11726
 If I make a MappingCharFilter with these mappings:
 {noformat}
   ( ->
   ) ->
 {noformat}
 i.e., just erase left and right paren, then tokenizing the string
 (F31) with e.g. WhitespaceTokenizer, produces a single token F31,
 with start offset 1 (good).
 But for its end offset, I would expect/want 4, but it produces 5
 today.
 This can be easily explained given how the mapping works: each time a
 mapping rule matches, we update the cumulative offset difference,
 conceptually as an array like this (it's encoded more compactly):
 {noformat}
   Output offset: 0 1 2 3
Input offset: 1 2 3 5
 {noformat}
 When the tokenizer produces F31, it assigns it startOffset=0 and
 endOffset=3 based on the characters it sees (F, 3, 1).  It then asks
 the CharFilter to correct those offsets, mapping them backwards
 through the above arrays, which creates startOffset=1 (good) and
 endOffset=5 (bad).
 At first, to fix this, I thought this is an off-by-1 and when
 correcting the endOffset we really should return
 1+correct(outputEndOffset-1), which would return the correct value (4)
 here.
 But that's too naive, e.g. here's another example:
 {noformat}
   aaaa -> cc
 {noformat}
 If I then tokenize aaaa, today we produce the correct offsets (0, 4)
 but if we do this off-by-1 fix for endOffset, we would get the wrong
 endOffset (2).
 I'm not sure what to do here...
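The offset arrays above can be turned into a small stand-alone sketch. The array-plus-binary-search lookup below is a simplified stand-in for BaseCharFilter's more compact encoding, not Lucene's actual code; it just reproduces the arithmetic of the "(F31)" example:

```java
import java.util.Arrays;

// Illustrative sketch of the offset-correction problem described above.
// Mapping "(F31)" -> "F31": output offset OUTPUT[i] corrects to INPUT[i].
public class OffsetDemo {
    static final int[] OUTPUT = {0, 1, 2, 3};
    static final int[] INPUT  = {1, 2, 3, 5};

    static int correct(int off) {
        // find the last recorded output offset <= off, extrapolate past it
        int i = Arrays.binarySearch(OUTPUT, off);
        if (i < 0) i = -i - 2;
        return INPUT[i] + (off - OUTPUT[i]);
    }

    public static void main(String[] args) {
        // token "F31" covers output offsets [0, 3)
        System.out.println(correct(0));         // start offset: 1 (good)
        System.out.println(correct(3));         // end offset:   5 (bad, want 4)
        System.out.println(correct(3 - 1) + 1); // naive off-by-1 fix: 4
    }
}
```

The naive fix happens to work for this mapping; as the description notes, it breaks on other mappings, which is why a separate end-offset correction is needed.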






[jira] [Comment Edited] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619521#comment-14619521
 ] 

Ryan Josal edited comment on SOLR-6234 at 7/8/15 10:47 PM:
---

I've attached otherHandler.patch, which is a patch on top of the existing 
ScoreJoinQParserPlugin.java from the Dec 14 patch, which shows changes to add a 
feature where the user can supply handler=/select (for example) in localparams, 
to apply the config from the handler in the other core.  You'll notice that the 
request is built from localParams instead of params so that whatever your 
current core params are don't override the other core configured params.  If 
this breaks something, it could be changed to only do that when the handler 
localParam is present.  This patch is quick and dirty because it uses 
reflection to get the defaults,appends,invariants config from the 
BaseRequestHandler class.  Really, the access level of those variables should 
change, or the method should be moved there.

Personally I'm using this feature to join my deals core with my products core 
as the othercore, and I wanted it to do a regular search against the products 
core.  This approach could actually be used in the \!join qparser too.  If this 
patch is useful for somebody, great!


was (Author: rjosal):
I've attached otherHandler.patch, which is a patch on top of the existing 
ScoreJoinQParserPlugin.java from the Dec 14 patch, which shows changes to add a 
feature where the user can supply handler=/select (for example) in localparams, 
to apply the config from the handler in the other core.  You'll notice that the 
request is built from localParams instead of params so that whatever your 
current core params are don't override the other core configured params.  If 
this breaks something, it could be changed to only do that when the handler 
localParam is present.  This patch is quick and dirty because it uses 
reflection to get the defaults,appends,invariants config from the 
BaseRequestHandler class.  Really, the access level of those variables should 
change, or the method should be moved there.

Personally I'm using this feature to join my deals core with my products core 
as the othercore, and I wanted it to do a regular search against the products 
core.  This approach could actually be used in the !join qparser too.  If this 
patch is useful for somebody, great!
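The "quick and dirty" reflection trick described above can be illustrated generically. The classes below are toys standing in for a request handler and its non-public config (the real patch touches Solr's handler base class, whose field names and access rules may differ):

```java
import java.lang.reflect.Field;
import java.util.Map;

// Generic illustration of reading a non-public config field (a stand-in
// for a handler's "defaults") via reflection. Toy code, not the patch.
public class ReflectDemo {
    static class Handler {
        private final Map<String, String> defaults = Map.of("wt", "json");
    }

    @SuppressWarnings("unchecked")
    static Map<String, String> readDefaults(Handler h) throws Exception {
        Field f = Handler.class.getDeclaredField("defaults");
        f.setAccessible(true); // fine within one module; real code may need opens
        return (Map<String, String>) f.get(h);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readDefaults(new Handler())); // prints {wt=json}
    }
}
```

As the comment says, widening the access level of those fields (or moving the lookup method into the base class) would make this reflection unnecessary.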

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch


 it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
  - {{score=none}} is *default*, e.g. if you *omit* this localparam 
 - supports {{b=100}} param to pass {{Query.setBoost()}}.
 - {{multiVals=true|false}} is introduced 
 - there is test coverage for the cross-core join case. 
 - so far it joins string and multivalue string fields (Sorted, SortedSet, 
 Binary), but not Numerics DVs. follow-up LUCENE-5868  
 -there was a bug in cross core join, however there is a workaround for it- 
 it's fixed in Dec'14 patch.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.






[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 172 - Still Failing

2015-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/172/

1 tests failed.
REGRESSION:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
responseHeader:{ status:0, QTime:0},   params:{wt:json},   
context:{ webapp:, path:/test1, httpMethod:GET},   
class:org.apache.solr.core.BlobStoreTestRequestHandler,   x:X val}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  responseHeader:{
status:0,
QTime:0},
  params:{wt:json},
  context:{
webapp:,
path:/test1,
httpMethod:GET},
  class:org.apache.solr.core.BlobStoreTestRequestHandler,
  x:X val}
at 
__randomizedtesting.SeedInfo.seed([DEE0A2015004190F:6AD8F56A7D9BCAF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:410)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Comment Edited] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619831#comment-14619831
 ] 

Cao Manh Dat edited comment on LUCENE-6595 at 7/9/15 3:39 AM:
--

Attached the slide.
[~rcmuir] I think the solution is quite clear now :)


was (Author: caomanhdat):
[~rcmuir] I think it's quite clear now :)

 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch, 
 Lucene-6595.pptx








[jira] [Commented] (SOLR-7172) addreplica API fails with incorrect error msg cannot create collection

2015-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619830#comment-14619830
 ] 

Shalin Shekhar Mangar commented on SOLR-7172:
-

Please go ahead!

 addreplica API fails with incorrect error msg cannot create collection
 

 Key: SOLR-7172
 URL: https://issues.apache.org/jira/browse/SOLR-7172
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 5.2, Trunk


 Steps to reproduce:
 # Create 1 node solr cloud cluster
 # Create collection 'test' with 
 numShards=1&replicationFactor=1&maxShardsPerNode=1
 # Call addreplica API:
 {code}
 http://localhost:8983/solr/admin/collections?action=addreplica&collection=test&shard=shard1&wt=json
  
 {code}
 API fails with the following response:
 {code}
 {
   "responseHeader": {
     "status": 400,
     "QTime": 9
   },
   "Operation ADDREPLICA caused exception:": 
 "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
 Cannot create collection test. No live Solr-instances",
   "exception": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "rspCode": 400
   },
   "error": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "code": 400
   }
 }
 {code}






[jira] [Updated] (LUCENE-6595) CharFilter offsets correction is wonky

2015-07-08 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated LUCENE-6595:
-
Attachment: Lucene-6595.pptx

[~rcmuir] I think it's quite clear now :)

 CharFilter offsets correction is wonky
 --

 Key: LUCENE-6595
 URL: https://issues.apache.org/jira/browse/LUCENE-6595
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Attachments: LUCENE-6595.patch, LUCENE-6595.patch, LUCENE-6595.patch, 
 Lucene-6595.pptx








[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b71) - Build # 13383 - Failure!

2015-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13383/
Java: 32bit/jdk1.9.0-ea-b71 -server -XX:+UseParallelGC 
-Djava.locale.providers=JRE,SPI

83 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.payloads.TestPayloadTermQuery

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_4_BlockTreeOrds_0.pos=1, _1.tvd=1, _1.nvd=1, _2.cfs=1, 
_1_BlockTreeOrds_0.pos=1, _0.nvd=1, _0_BlockTreeOrds_0.doc=1, 
_1_BlockTreeOrds_0.tio=1, _4_BlockTreeOrds_0.pay=1, _1_BlockTreeOrds_0.pay=1, 
_4_BlockTreeOrds_0.tio=1, _1.fdt=1, _0_BlockTreeOrds_0.pos=1, 
_4_BlockTreeOrds_0.doc=1, _4.tvd=1, _0_BlockTreeOrds_0.tio=1, 
_1_BlockTreeOrds_0.doc=1, _4.nvd=1, _0.fdt=1, _0.tvd=1, _4.fdt=1, _3.cfs=1, 
_0_BlockTreeOrds_0.pay=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_4_BlockTreeOrds_0.pos=1, _1.tvd=1, _1.nvd=1, _2.cfs=1, 
_1_BlockTreeOrds_0.pos=1, _0.nvd=1, _0_BlockTreeOrds_0.doc=1, 
_1_BlockTreeOrds_0.tio=1, _4_BlockTreeOrds_0.pay=1, _1_BlockTreeOrds_0.pay=1, 
_4_BlockTreeOrds_0.tio=1, _1.fdt=1, _0_BlockTreeOrds_0.pos=1, 
_4_BlockTreeOrds_0.doc=1, _4.tvd=1, _0_BlockTreeOrds_0.tio=1, 
_1_BlockTreeOrds_0.doc=1, _4.nvd=1, _0.fdt=1, _0.tvd=1, _4.fdt=1, _3.cfs=1, 
_0_BlockTreeOrds_0.pay=1}
at __randomizedtesting.SeedInfo.seed([807EB1F5AD452A17]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:735)
at 
org.apache.lucene.search.payloads.TestPayloadTermQuery.afterClass(TestPayloadTermQuery.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _3.cfs
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:623)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:667)
at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.<init>(Lucene50CompoundReader.java:71)
at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:93)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:671)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:73)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.lucene.search.payloads.TestPayloadTermQuery.testIgnoreSpanScorer(TestPayloadTermQuery.java:236)
at 

[jira] [Commented] (SOLR-4907) Discuss and create instructions for taking Solr from the example to robust multi-server production

2015-07-08 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619695#comment-14619695
 ] 

Steve Davids commented on SOLR-4907:


I created a Solr RPM (yum) and DEB (apt-get) package builder here: 
https://github.com/sdavids13/solr-os-packager. It would be great if those 
packages can be built and pushed out with new Solr releases to make life a bit 
easier for clients to install and update to newer versions of Solr. The real 
meat is happening in the Gradle build file which uses Netflix's 
gradle-os-package plugin: 
https://github.com/sdavids13/solr-os-packager/blob/master/build.gradle.

 Discuss and create instructions for taking Solr from the example to robust 
 multi-server production
 --

 Key: SOLR-4907
 URL: https://issues.apache.org/jira/browse/SOLR-4907
 Project: Solr
  Issue Type: Improvement
Reporter: Shawn Heisey
 Attachments: SOLR-4907-install.sh


 There are no good step-by-step instructions for taking the Solr example and 
 producing a robust production setup on multiple servers.






RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
The test passes on JDK 9 b71 with:
-Dargs=-Djava.locale.providers=JRE,SPI

This re-enables the old locale data. I will add this to the build parameters of 
Policeman Jenkins to stop this from failing. To me it looks like the locale 
data is somehow not able to correctly parse weekdays and/or timezones. I will 
check this out tomorrow and report a bug to the OpenJDK people. There is 
something fishy with the CLDR locale data. There are already some bugs open, so 
work is not yet finished (e.g. sometimes it uses wrong timezone shortcuts, ...)
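The locale sensitivity described above can be sidestepped by pinning the parse locale explicitly. A minimal sketch (not Solr's DateUtil, just the general technique) that parses an HTTP-cookie-style date independently of whether the JDK's default locale provider is JRE or CLDR:

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// Minimal sketch: parse an HTTP/cookie-style date with an explicit Locale,
// so the result does not depend on the JDK's default locale provider
// (JRE vs CLDR), then emit the ISO-8601 form Solr expects.
public class DateParseDemo {
    static final DateTimeFormatter HTTP_DATE =
        DateTimeFormatter.ofPattern("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);

    static String toIso(String httpDate) {
        return ZonedDateTime.parse(httpDate, HTTP_DATE)
                            .format(DateTimeFormatter.ISO_INSTANT);
    }

    public static void main(String[] args) {
        System.out.println(toIso("Wed, 08 Jul 2015 23:35:52 GMT"));
        // 2015-07-08T23:35:52Z
    }
}
```

A DateUtil testcase along these lines would catch a regression like this one regardless of which locale database the JDK ships.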

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Thursday, July 09, 2015 12:16 AM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 Hi,
 
 It is exactly like that. The contenthandler tries to parse all metadata 
 strings as
 date using DateUtil.parseDate() and if that fails, it leaves them unchanged. 
 In
 JDK 8 it succeeds to parse the date and then converts it to ISO-8601 Solr
 format. With JDK 9 b71 it passes the plain string in the TIKA format (it uses
 HTTP Cookie-like dates in TIKA), which then fails in TrieDateField.
 
 Unfortunately, I was not able to setup Eclipse with JDK 9, so I cannot debug
 this. I just did this in JDK 8 to verify how it works...
 
 We should add a testcase for DateUtil and then try to find out how this fails 
 in
 JDK 9. It looks like this class is completely untested with strange date
 formats.
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
  Sent: Wednesday, July 08, 2015 11:46 PM
  To: dev@lucene.apache.org
  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build
 #
  13198 - Still Failing!
 
 
  My guess (haven't verified) is that if you dig into the
  ExtractionRequestHandler code, you will probably find it parsing Tika
  Metadata Strings (All Tika metadata are simple Strings, correct?) into
  Date objects if/when it can/should based on the field type -- and when it
  can't it propagates the metadata string as is (so that, for example, a
  subsequent update processor can parse it)
 
  The error from TrieDateField is probably just because a completely
  untouched metadata string (coming from Tika) is not a Date (or a correctly
  ISO formatted string) ... leaving the big question being: what changed in
  the JDK such that that Tika metadata is no longer being parsed properly
  into a Date in the first place?
 
  (NOTE: I'm speculating in all of this based on what little i remember
  about Tika)
 
 
  : Date: Wed, 8 Jul 2015 23:35:52 +0200
  : From: Uwe Schindler u...@thetaphi.de
  : Reply-To: dev@lucene.apache.org
  : To: dev@lucene.apache.org
  : Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
 Build
  #
  :  13198 - Still Failing!
  :
  : In fact this is really strange! The Exception is thrown because the
  DateMathParser does not find the Z inside the date string - it just
  complains that its not ISO8601... To me it looks like although it should 
  only
 get
  ISO8601 date, somehow extracting contenthandler sends a default
  formatted date to the importer, obviously the one from the input
 document
  - maybe TIKA exports it incorrectly...
  :
  : Unfortunately I have no idea how to explain this, the bug looks like being
 in
  contrib/extraction. Importing standard documents with ISO8601 date to
 Solr
  works perfectly fine! I have to setup debugging in Eclipse with Java 9!
  :
  : Uwe
  :
  : -
  : Uwe Schindler
  : H.-H.-Meier-Allee 63, D-28213 Bremen
  : http://www.thetaphi.de
  : eMail: u...@thetaphi.de
  :
  :
  :  -Original Message-
  :  From: Uwe Schindler [mailto:u...@thetaphi.de]
  :  Sent: Wednesday, July 08, 2015 11:19 PM
  :  To: dev@lucene.apache.org
  :  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
  Build #
  :  13198 - Still Failing!
  : 
  :  Hi Hoss,
  : 
  :  it does not fail with build 70, but suddenly fails with build 71.
  : 
  :  The major change in build 71 is (next to the fix for the nasty arraycopy
  bug),
  :  that the JDK now uses a completely different Locale database by default
  :  (CLDR):
  :  http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8008577
  :  http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/22f901cf304f
  : 
  :  This fails with any random seed. To me it looks like the Solr Trie Date
 field
  :  parser seem to have problems with the new locale data. I have no idea
  how
  :  it parses the date (simpledateformat?) and if this is really Locale-
  :  Independent.
  : 
  :  Uwe
  : 
  :  -
  :  Uwe Schindler
  :  H.-H.-Meier-Allee 63, D-28213 Bremen
  :  http://www.thetaphi.de
  :  eMail: u...@thetaphi.de
  : 
  : 
  :   

RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
Could be something like this bug: 
https://bugs.openjdk.java.net/browse/JDK-8129881

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Thursday, July 09, 2015 12:40 AM
 To: dev@lucene.apache.org
 Cc: rory.odonn...@oracle.com
 Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 The test passes on JDK 9 b71 with:
 -Dargs=-Djava.locale.providers=JRE,SPI
 
 This reenabled the old Locale data. I will add this to the build parameters of
 policeman Jenkins to stop this from failing. To me it looks like the locale 
 data
 somehow is not able to correctly parse weekdays and/or timezones. I will
 check this out tomorrow and report a bug to the OpenJDK people. There is
 something fishy with CLDR locale data. There are already some bugs open, so
 work is not yet finished (e.g. sometimes it uses wrong timezone shortcuts,...)
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Thursday, July 09, 2015 12:16 AM
  To: dev@lucene.apache.org
  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build
 #
  13198 - Still Failing!
 
  Hi,
 
  It is exactly like that. The contenthandler tries to parse all metadata 
  strings
 as
  date using DateUtil.parseDate() and if that fails, it leaves them unchanged.
 In
  JDK 8 it succeeds to parse the date and then converts it to ISO-8601 Solr
  format. With JDK 9 b71 it passes the plain string in the TIKA format (it 
  uses
  HTTP Cookie-like dates in TIKA), which then fails in TrieDateField.
 
  Unfortunately, I was not able to setup Eclipse with JDK 9, so I cannot debug
  this. I just did this in JDK 8 to verify how it works...
 
  We should add a testcase for DateUtil and then try to find out how this 
  fails
 in
  JDK 9. It looks like this class is completely untested with strange date
  formats.
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
   -Original Message-
   From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
   Sent: Wednesday, July 08, 2015 11:46 PM
   To: dev@lucene.apache.org
   Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
 Build
  #
   13198 - Still Failing!
  
  
   My guess (haven't verified) is that if you dig into the
   ExtractionRequestHandler code, you will probably find it parsing Tika
   Metadata Strings (All Tika metadata are simple Strings, correct?) into
   Date objects if/when it can/should based on the field type -- and when it
   can't it propagates the metadata string as is (so that, for example, a
   subsequent update processor can parse it)
  
   The error from TrieDateField is probably just because a completely
   untouched metadata string (coming from Tika) is not a Date (or a correctly
   ISO formatted string) ... leaving the big question being: what changed in
   the JDK such that that Tika metadata is no longer being parsed properly
   into a Date in the first place?
  
   (NOTE: I'm speculating in all of this based on what little i remember
   about Tika)
  
  
   : Date: Wed, 8 Jul 2015 23:35:52 +0200
   : From: Uwe Schindler u...@thetaphi.de
   : Reply-To: dev@lucene.apache.org
   : To: dev@lucene.apache.org
   : Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
  Build
   #
   :  13198 - Still Failing!
   :
    : In fact this is really strange! The Exception is thrown because the
    DateMathParser does not find the Z inside the date string - it just
    complains that it's not ISO8601... To me it looks like although it should
    only get ISO8601 date, somehow extracting contenthandler sends a default
    formatted date to the importer, obviously the one from the input
   document
    - maybe TIKA exports it incorrectly...
   :
   : Unfortunately I have no idea how to explain this, the bug looks like
 being
  in
   contrib/extraction. Importing standard documents with ISO8601 date to
  Solr
   works perfectly fine! I have to setup debugging in Eclipse with Java 9!
   :
   : Uwe
   :
   : -
   : Uwe Schindler
   : H.-H.-Meier-Allee 63, D-28213 Bremen
   : http://www.thetaphi.de
   : eMail: u...@thetaphi.de
   :
   :
   :  -Original Message-
   :  From: Uwe Schindler [mailto:u...@thetaphi.de]
   :  Sent: Wednesday, July 08, 2015 11:19 PM
   :  To: dev@lucene.apache.org
   :  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
   Build #
   :  13198 - Still Failing!
   : 
   :  Hi Hoss,
   : 
   :  it does not fail with build 70, but suddenly fails with build 71.
   : 
   :  The major change in build 71 is (next to the fix for the nasty 
   arraycopy
   bug),
   :  that the JDK now uses a completely different Locale 

[jira] [Updated] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Ryan Josal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Josal updated SOLR-6234:
-
Attachment: otherHandler.patch

I've attached otherHandler.patch, which is a patch on top of the existing 
ScoreJoinQParserPlugin.java from the Dec 14 patch, which shows changes to add a 
feature where the user can supply handler=/select (for example) in localparams, 
to apply the config from the handler in the other core.  You'll notice that the 
request is built from localParams instead of params, so that your current 
core's params don't override the other core's configured params.  If this 
breaks something, it could be changed to only do that when the handler 
localParam is present.  This patch is quick and dirty because it uses 
reflection to get the defaults, appends, and invariants config from the 
BaseRequestHandler class.  Really, the access level of those variables should 
change, or the method should be moved there.

Personally I'm using this feature to join my deals core with my products core 
as the othercore, and I wanted it to do a regular search against the products 
core.  This approach could actually be used in the !join qparser too.  If this 
patch is useful for somebody, great!
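The reflection workaround mentioned above amounts to reading private fields whose access level can't (yet) be widened; a generic sketch of the technique (class and field names are illustrative, not Solr's):

```java
import java.lang.reflect.Field;

class Handler {
    // Stands in for a handler's private config; not Solr's actual field.
    private String defaults = "df=text";
}

public class ReflectionSketch {
    public static void main(String[] args) throws Exception {
        // Reflectively open up the private field and read its value
        Field f = Handler.class.getDeclaredField("defaults");
        f.setAccessible(true);
        System.out.println(f.get(new Handler())); // df=text
    }
}
```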

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch


 It adds a {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
  - {{score=none}} is the *default*, e.g. if you *omit* this localparam 
 - a {{b=100}} param to pass {{Query.setBoost()}}
 - {{multiVals=true|false}} is introduced 
 - test coverage for the cross-core join case 
 - so far it joins string and multivalued string fields (Sorted, SortedSet, 
 Binary), but not numeric DVs; follow-up in LUCENE-5868  
 -there was a bug in cross core join, however there is a workaround for it- 
 it's fixed in the Dec '14 patch.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ADDREPLICA and maxShardsPerNode and specifying a node

2015-07-08 Thread Shai Erera
I think that if addreplica is called, we should assume the user understands
what he's doing, and we can safely ignore maxShardsPerNode.

+1 on renaming it. I don't think it's too late to do anything. First, it
is probably an advanced parameter, if not an expert one. Second, we can
always deprecate it, add maxReplicasPerNode, support both, and remove the
former in two minor releases or the next major one.

IMO it's more important to have a good API with sensible parameter names
than to leave it as is because of back-compat concerns.

Shai
On Jul 9, 2015 8:33 AM, Erick Erickson erickerick...@gmail.com wrote:

 I noticed today that if I create a collection with maxShardsPerNode, I
 can freely exceed that number by specifying the node parameter on an
 ADDREPLICA command.

 Is this intended behavior, or should I raise a JIRA? I can argue that
 if a user is really going to specify a node, then we should do
 whatever they say even if it exceeds maxShardsPerNode. If this is
 intended, though, I'll want to explicitly call that out in the ref guide.

 Erick

 P.S. I'd _really_ like to rename maxShardsPerNode to
 maxReplicasPerNode, but it's too late for that.
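The policy question boils down to whether an explicit node choice should bypass the cap. As a sketch of the observed behavior (hypothetical names, not Solr's actual placement code):

```java
public class PlacementSketch {
    // Current (observed) behavior: an explicitly requested node wins,
    // and maxShardsPerNode is only enforced for automatic placement.
    static boolean canPlaceReplica(int replicasOnNode, int maxShardsPerNode,
                                   boolean nodeExplicitlyGiven) {
        if (nodeExplicitlyGiven) {
            return true; // user said "this node": cap is ignored
        }
        return replicasOnNode < maxShardsPerNode;
    }

    public static void main(String[] args) {
        System.out.println(canPlaceReplica(3, 2, true));  // cap exceeded but allowed
        System.out.println(canPlaceReplica(3, 2, false)); // cap enforced
    }
}
```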





[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619250#comment-14619250
 ] 

Timothy Potter commented on SOLR-6234:
--

I'm working to get this patch committed to trunk / 5x

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch









[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2015-07-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619373#comment-14619373
 ] 

Mikhail Khludnev commented on SOLR-6234:


Tell me if I need to improve something! Thanks!

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
  Labels: features, patch, test
 Fix For: 5.3

 Attachments: SOLR-6234.patch, SOLR-6234.patch









RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build # 13198 - Still Failing!

2015-07-08 Thread Uwe Schindler
In fact this is really strange! The exception is thrown because the 
DateMathParser does not find the Z inside the date string - it just complains 
that it's not ISO8601... To me it looks like, although it should only get an 
ISO8601 date, the extracting content handler somehow sends a default-formatted 
date to the importer, obviously the one from the input document - maybe Tika 
exports it incorrectly...

Unfortunately I have no idea how to explain this; the bug looks like it is in 
contrib/extraction. Importing standard documents with an ISO8601 date to Solr 
works perfectly fine! I have to set up debugging in Eclipse with Java 9!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Wednesday, July 08, 2015 11:19 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build #
 13198 - Still Failing!
 
 Hi Hoss,
 
 it does not fail with build 70, but suddenly fails with build 71.
 
 The major change in build 71 is (next to the fix for the nasty arraycopy bug),
 that the JDK now uses a completely different Locale database by default
 (CLDR):
 http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8008577
 http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/22f901cf304f
 
 This fails with any random seed. To me it looks like the Solr Trie date field
 parser seems to have problems with the new locale data. I have no idea how
 it parses the date (SimpleDateFormat?) and whether this is really
 locale-independent.
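One way to guard against the CLDR switch is to pin both the Locale and the TimeZone explicitly instead of relying on JDK defaults; a minimal sketch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class LocalePinSketch {
    public static void main(String[] args) {
        // Pinning Locale.US and UTC makes the output independent of the
        // JDK's default locale database (legacy JRE data vs CLDR in b71).
        SimpleDateFormat fmt =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss yyyy", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(fmt.format(new Date(0L)));
        // prints: Thu Jan 01 00:00:00 1970
    }
}
```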
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Wednesday, July 08, 2015 10:57 PM
  To: dev@lucene.apache.org
  Subject: RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build
 #
  13198 - Still Failing!
 
  Hi Hoss,
 
  It is strange that only this single test fails. The date mentioned looks
  good. I have no idea if DIH uses JDK date parsing; maybe there is a new bug.
  The last JDK Policeman used was b60; my last try with the previous b70
  failed horribly and did not even reach Solr (arraycopy bugs), so the bug may
  have been introduced in b61 or later.
 
  Uwe
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
   -Original Message-
   From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
   Sent: Wednesday, July 08, 2015 7:42 PM
   To: dev@lucene.apache.org
   Cc: sh...@apache.org
   Subject: Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) -
   Build #
   13198 - Still Failing!
  
  
    2 nearly identical failures from Policeman immediately after upgrading
    jdk1.9.0 to b71 ... seems like more than a coincidence.
  
   ant test  -Dtestcase=ExtractingRequestHandlerTest -
   Dtests.method=testArabicPDF -Dtests.seed=232D0A5404C2ADED -
   Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en_JM -
   Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true
   -Dtests.file.encoding=UTF-
   8
  
   ant test  -Dtestcase=ExtractingRequestHandlerTest -
   Dtests.method=testArabicPDF -Dtests.seed=F05E63CDB3511062 -
   Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kk -
   Dtests.timezone=Asia/Hebron -Dtests.asserts=true -
   Dtests.file.encoding=UTF-8
  
    ...I haven't dug into the details of the test, but neither of those
    commands reproduces the failure for me on Java7 or Java8 -- can someone
    with a few Java9 builds try them out on both b70 (or earlier) and b71 to
    try and confirm if this is definitely a new failure in b71?
  
  
  
   : Date: Wed, 8 Jul 2015 14:01:17 + (UTC)
   : From: Policeman Jenkins Server jenk...@thetaphi.de
   : Reply-To: dev@lucene.apache.org
   : To: sh...@apache.org, dev@lucene.apache.org
   : Subject: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b71) - Build
 #
   : 13198 - Still Failing!
   :
   : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13198/
   : Java: 64bit/jdk1.9.0-ea-b71 -XX:+UseCompressedOops -
   XX:+UseConcMarkSweepGC
   :
   : 4 tests failed.
   : FAILED:
   org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.testAr
   abic
   PDF
   :
   : Error Message:
   : Invalid Date String:'Tue Mar 09 08:44:49 EEST 2010'
   :
   : Stack Trace:
   : org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar
   09
   08:44:49 EEST 2010'
   : at
  
 
 __randomizedtesting.SeedInfo.seed([F05E63CDB3511062:9E9818C2B094BB3
   7]:0)
   : at
   org.apache.solr.schema.TrieDateField.parseMath(TrieDateField.java:150)
   : at 
   org.apache.solr.schema.TrieField.createField(TrieField.java:657)
   : at 
   org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
   : at
  
 
 org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:4
   8
   )
   :
