[jira] [Updated] (SOLR-7275) Pluggable authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7275:
---
Attachment: SOLR-7275.patch

Patch that adds request type info for /select [READ] and /update [WRITE] 
requests.
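
For illustration, a minimal sketch of a plugin that could consume this request-type information, assuming the {{SolrAuthorizationPlugin}} and {{SolrRequestContext}} interfaces proposed below; the class name, the accessors and the {{setAuthorized}} mutator are illustrative, not part of the patch:

{code}
// Hypothetical example only: allow READ operations for everyone, but
// require an "admin" role for WRITE operations. Accessor names are
// assumptions on top of the proposed context/response classes.
public class ReadOnlyAuthorizationPlugin implements SolrAuthorizationPlugin {
  @Override
  public SolrAuthorizationResponse isAuthorized(SolrRequestContext context) {
    SolrAuthorizationResponse response = new SolrAuthorizationResponse();
    if (context.getOperationType() == OperationType.READ) {
      response.setAuthorized(true);  // /select [READ] passes
    } else {
      // /update [WRITE] requires the admin role from the authentication layer
      response.setAuthorized(context.getUserInfo().getRoles().contains("admin"));
    }
    return response;
  }
}
{code}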

> Pluggable authorization module in Solr
> --
>
> Key: SOLR-7275
> URL: https://issues.apache.org/jira/browse/SOLR-7275
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch
>
>
> Solr needs an interface that makes it easy for different authorization 
> systems to be plugged into it. Here's what I plan on doing:
> Define an interface {{SolrAuthorizationPlugin}} with a single method 
> {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
> return a {{SolrAuthorizationResponse}} object. The response as of now would 
> only contain a single boolean value, but in the future it could carry more 
> information, e.g. ACLs for document filtering.
> The reason we need a context object is so that the plugin doesn't need to 
> understand Solr's internals, e.g. how to extract the collection name or other 
> information from the incoming request, given that there are multiple ways to 
> specify the target collection for a request. Similarly, the request type can 
> be specified via {{qt}} or the {{/handler_name}} path.
> Flow:
> Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
> {code}
> public interface SolrAuthorizationPlugin {
>   public SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
> }
> {code}
> {code}
> public class SolrRequestContext {
>   UserInfo userInfo;            // user context from the authentication layer
>   HttpServletRequest request;   // the incoming HTTP request
>   OperationType operationType;  // enum value, correlated with user roles
>   String[] collectionsAccessed;
>   String[] fieldsAccessed;
>   String resource;
> }
> {code}
> {code}
> public class SolrAuthorizationResponse {
>   boolean authorized;
>   public boolean isAuthorized() { return authorized; }
> }
> {code}
> User Roles: 
> * Admin
> * Collection Level:
>   * Query
>   * Update
>   * Admin
> Using this framework, an implementation could be written for specific 
> security systems e.g. Apache Ranger or Sentry. It would keep all the security 
> system specific code out of Solr.
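
As a rough illustration of the Flow line in the description, the check inside {{SolrDispatchFilter}} could look something like the sketch below; {{buildContext}}, the plugin wiring and all other names are assumptions, not part of this issue:

{code}
// Illustrative sketch of Request -> SolrDispatchFilter -> isAuthorized(context)
// -> Process/Return. All names here are assumptions, not the actual patch.
public void doFilter(ServletRequest request, ServletResponse response,
                     FilterChain chain) throws IOException, ServletException {
  SolrRequestContext context = buildContext(request);          // hypothetical helper
  SolrAuthorizationResponse auth = authorizationPlugin.isAuthorized(context);
  if (!auth.isAuthorized()) {
    // Return: short-circuit unauthorized requests with a 403
    ((HttpServletResponse) response).sendError(HttpServletResponse.SC_FORBIDDEN,
        "Request not authorized");
    return;
  }
  chain.doFilter(request, response);                           // Process
}
{code}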






[jira] [Comment Edited] (SOLR-6213) StackOverflowException in Solr cloud's leader election

2015-05-12 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541359#comment-14541359
 ] 

Forest Soup edited comment on SOLR-6213 at 5/13/15 5:27 AM:


The stackoverflow exception is in the attachment.


was (Author: forest_soup):
The stackoverflow exception.

> StackOverflowException in Solr cloud's leader election
> --
>
> Key: SOLR-6213
> URL: https://issues.apache.org/jira/browse/SOLR-6213
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, Trunk
>Reporter: Dawid Weiss
>Priority: Critical
> Attachments: stackoverflow.txt
>
>
> This is what's causing test hangs (at least on FreeBSD, LUCENE-5786), 
> possibly on other machines too. The problem is stack overflow from looped 
> calls in:
> {code}
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > [... this six-frame cycle repeats until the stack overflows ...]
> {code}

[jira] [Updated] (SOLR-6213) StackOverflowException in Solr cloud's leader election

2015-05-12 Thread Forest Soup (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Forest Soup updated SOLR-6213:
--
Attachment: stackoverflow.txt

The stackoverflow exception.

> StackOverflowException in Solr cloud's leader election
> --
>
> Key: SOLR-6213
> URL: https://issues.apache.org/jira/browse/SOLR-6213
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, Trunk
>Reporter: Dawid Weiss
>Priority: Critical
> Attachments: stackoverflow.txt
>
>
> This is what's causing test hangs (at least on FreeBSD, LUCENE-5786), 
> possibly on other machines too. The problem is stack overflow from looped 
> calls in:
> {code}
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > [... this six-frame cycle repeats until the stack overflows ...]
> {code}

[jira] [Commented] (SOLR-6213) StackOverflowException in Solr cloud's leader election

2015-05-12 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541350#comment-14541350
 ] 

Forest Soup commented on SOLR-6213:
---

I met the same issue with Solr 4.7.0.
Too many recursive calls through the lines below:
at org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:399)
at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:259)
at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:164)
at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:108)
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:289)
[... this five-frame cycle repeats until the stack overflows ...]

> StackOverflowException in Solr cloud's leader election
> --
>
> Key: SOLR-6213
> URL: https://issues.apache.org/jira/browse/SOLR-6213
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, Trunk
>Reporter: Dawid Weiss
>Priority: Critical
>
> This is what's causing test hangs (at least on FreeBSD, LUCENE-5786), 
> possibly on other machines too. The problem is stack overflow from looped 
> calls in:
> {code}
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > [... this six-frame cycle repeats until the stack overflows ...]
> {code}

[jira] [Commented] (SOLR-6213) StackOverflowException in Solr cloud's leader election

2015-05-12 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541349#comment-14541349
 ] 

Forest Soup commented on SOLR-6213:
---

Can we set a maximum retry count instead of retrying forever until the stack 
overflows?
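
Something along these lines, perhaps (a purely hypothetical sketch, not the actual Solr code; the constant, the counter and the error handling are all illustrative):

{code}
// Hypothetical sketch: cap rejoin attempts so a persistently failing
// election surfaces as an error instead of recursing until StackOverflowError.
private static final int MAX_REJOIN_ATTEMPTS = 100; // illustrative limit
private int rejoinAttempts = 0;

private void rejoinLeaderElection(SolrCore core) throws Exception {
  if (++rejoinAttempts > MAX_REJOIN_ATTEMPTS) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
        "Giving up leader election after " + MAX_REJOIN_ATTEMPTS + " attempts");
  }
  // ... the existing cancel/rejoin logic would go here ...
}
{code}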

> StackOverflowException in Solr cloud's leader election
> --
>
> Key: SOLR-6213
> URL: https://issues.apache.org/jira/browse/SOLR-6213
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, Trunk
>Reporter: Dawid Weiss
>Priority: Critical
>
> This is what's causing test hangs (at least on FreeBSD, LUCENE-5786), 
> possibly on other machines too. The problem is stack overflow from looped 
> calls in:
> {code}
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > [... this six-frame cycle repeats until the stack overflows ...]
> {code}

[jira] [Commented] (SOLR-5692) StackOverflowError during SolrCloud leader election process

2015-05-12 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541343#comment-14541343
 ] 

Forest Soup commented on SOLR-5692:
---

I met the same issue with Solr 4.7.0.
Too many recursive calls through the lines below:
at org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:399)
at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:259)
at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:164)
at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:108)
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:289)
[... this five-frame cycle repeats until the stack overflows ...]

> StackOverflowError during SolrCloud leader election process
> ---
>
> Key: SOLR-5692
> URL: https://issues.apache.org/jira/browse/SOLR-5692
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Bojan Smid
>  Labels: difficulty-hard, impact-medium
> Attachments: recovery-stackoverflow.txt
>
>
> I have a SolrCloud cluster with 7 nodes, each with a few thousand cores. I got this 
> StackOverflowError a few times when starting one of the nodes (just a piece of the 
> stack trace; the rest repeats, the leader election process obviously got stuck in 
> infinite repetition of steps):
> 2014-02-04 15:18:01,947 [localhost-startStop-1-EventThread] ERROR 
> org.apache.zookeeper.ClientCnxn - Error while calling watcher 
> java.lang.StackOverflowError
> at java.security.AccessController.doPrivileged(Native Method)
> at java.io.PrintWriter.<init>(PrintWriter.java:116)
> at java.io.PrintWriter.<init>(PrintWriter.java:100)
> at org.apache.solr.common.SolrException.toStr(SolrException.java:138)
> at org.apache.solr.common.SolrException.log(SolrException.java:113)
> at org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:377)
> at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
> at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)
> at org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:380)
> [... this five-frame cycle repeats until the StackOverflowError ...]

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3109 - Still Failing

2015-05-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3109/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerConcurrent

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent: 
   1) Thread[id=207, name=qtp207074561-207, state=RUNNABLE, 
group=TGRP-TestSolrConfigHandlerConcurrent]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
        at org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:464)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:362)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:175)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:168)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
        at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
        at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
        at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:497)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerConcurrent: 
   1) Thread[id=207, name=qtp207074561-207, state=RUNNABLE, 
group=TGRP-TestSolrConfigHandlerConcurrent]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at 
org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532

[jira] [Created] (SOLR-7537) Could not find or load main class org.apache.solr.util.SimplePostTool

2015-05-12 Thread Peng Li (JIRA)
Peng Li created SOLR-7537:
-

 Summary: Could not find or load main class 
org.apache.solr.util.SimplePostTool
 Key: SOLR-7537
 URL: https://issues.apache.org/jira/browse/SOLR-7537
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
 Environment: Windows 8.1, cygwin4.3.33
Reporter: Peng Li


In "solr-5.1.0/bin" folder, I typed below command "../doc" folder has 
"readme.docx"
sh post -c gettingstarted ../doc

And I got below exception:
c:\Java\jdk1.8.0_20/bin/java -classpath 
/cygdrive/c/Users/lipeng/_Main/Servers/solr-5.1.0/dist/solr-core-5.1.0.jar 
-Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes 
org.apache.solr.util.SimplePostTool ../doc
Error: Could not find or load main class org.apache.solr.util.SimplePostTool

I followed the instructions here: http://lucene.apache.org/solr/quickstart.html

Can you help me take a look? Thank you!
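
(A note for anyone hitting the same thing: a likely cause is that under Cygwin the script passes a POSIX-style /cygdrive/... classpath, which the Windows java.exe cannot resolve. A workaround sketch, assuming a standard Cygwin install with cygpath available, converts the path first:)

{code}
# Workaround sketch: hand java.exe a Windows-style classpath, since it
# does not understand Cygwin's /cygdrive/... paths.
JAR=$(cygpath -w "/cygdrive/c/Users/lipeng/_Main/Servers/solr-5.1.0/dist/solr-core-5.1.0.jar")
java -classpath "$JAR" -Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes \
  org.apache.solr.util.SimplePostTool ../doc
{code}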






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12480 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12480/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseG1GC

37 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.ja.TestJapaneseIterationMarkCharFilterFactory

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([7E880F65DE4809D0]:0)


FAILED:  
org.apache.lucene.analysis.ja.TestJapaneseIterationMarkCharFilterFactory.testKanjiOnlyIterationMarksWithJapaneseTokenizer

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7E880F65DE4809D0]:0)


FAILED:  org.apache.lucene.analysis.ja.TestJapaneseTokenizer.testDecomposition3

Error Message:
Could not initialize class 
org.apache.lucene.analysis.ja.dict.UnknownDictionary$SingletonHolder

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.lucene.analysis.ja.dict.UnknownDictionary$SingletonHolder
at 
__randomizedtesting.SeedInfo.seed([7E880F65DE4809D0:1C14D845886994D2]:0)
at 
org.apache.lucene.analysis.ja.dict.UnknownDictionary.getInstance(UnknownDictionary.java:72)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.<init>(JapaneseTokenizer.java:214)
at 
org.apache.lucene.analysis.ja.TestJapaneseTokenizer$3.createComponents(TestJapaneseTokenizer.java:85)
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:179)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:386)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:352)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:378)
at 
org.apache.lucene.analysis.ja.TestJapaneseTokenizer.testDecomposition3(TestJapaneseTokenizer.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Commented] (LUCENE-6464) Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()

2015-05-12 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541217#comment-14541217
 ] 

Arcadius Ahouansou commented on LUCENE-6464:


Hello [~mikemccand]
Please have a look at the new patch when you have the chance.

I have now added 
{code}
public List<LookupResult> lookup(CharSequence key, BooleanQuery contextQuery, 
int num, boolean allTermsRequired, boolean doHighlight)
{code}
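
For illustration, a caller wanting the (A OR B) AND (C OR D) grouping from the description could build the context query along these lines (a sketch against the pre-5.3 mutable BooleanQuery API; the "contexts" field name and the suggester variable are assumptions):

{code}
// Sketch: build (A OR B) AND (C OR D) as a BooleanQuery of two OR-groups,
// each group being a nested BooleanQuery of SHOULD clauses.
BooleanQuery groupAB = new BooleanQuery();
groupAB.add(new TermQuery(new Term("contexts", "A")), BooleanClause.Occur.SHOULD);
groupAB.add(new TermQuery(new Term("contexts", "B")), BooleanClause.Occur.SHOULD);

BooleanQuery groupCD = new BooleanQuery();
groupCD.add(new TermQuery(new Term("contexts", "C")), BooleanClause.Occur.SHOULD);
groupCD.add(new TermQuery(new Term("contexts", "D")), BooleanClause.Occur.SHOULD);

BooleanQuery contextQuery = new BooleanQuery();
contextQuery.add(groupAB, BooleanClause.Occur.MUST);  // (A OR B)
contextQuery.add(groupCD, BooleanClause.Occur.MUST);  // AND (C OR D)

List<Lookup.LookupResult> results =
    suggester.lookup("key", contextQuery, 10, true, true);
{code}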

> Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()
> 
>
> Key: LUCENE-6464
> URL: https://issues.apache.org/jira/browse/LUCENE-6464
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.1
>Reporter: Arcadius Ahouansou
> Attachments: LUCENE-6464.patch, LUCENE-6464.patch
>
>
> This is an enhancement to LUCENE-6050 
> LUCENE-6050 added
> {code}
> lookup(CharSequence key, Map<BytesRef, BooleanClause.Occur> contextInfo, 
> int num, boolean allTermsRequired, boolean doHighlight)
> {code}
> which allowed queries like
> (A OR B AND C OR D ...)
> In our use case, we realise that we need grouping, i.e.
> (A OR B) AND (C OR D) AND (...)
> In other words, we need the intersection of multiple contexts.
> The attached patch allows passing in a varargs of maps, each one representing 
> one group. It looks a bit heavy IMHO.
> This is an initial patch.
> The question to [~mikemccand] and [~janechang] is:
> is it better to expose a FilteredQuery/Query and let the user build their own 
> query instead of passing a map?
> Exposing a filteredQuery will probably give the best flexibility to the 
> end-users.






[jira] [Updated] (LUCENE-6464) Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()

2015-05-12 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated LUCENE-6464:
---
Attachment: LUCENE-6464.patch

Added lookup method with flexible filtering by BooleanQuery

> Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()
> 
>
> Key: LUCENE-6464
> URL: https://issues.apache.org/jira/browse/LUCENE-6464
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.1
>Reporter: Arcadius Ahouansou
> Attachments: LUCENE-6464.patch, LUCENE-6464.patch
>
>
> This is an enhancement to LUCENE-6050 
> LUCENE-6050 added
> {code}
> lookup(CharSequence key, Map<BytesRef, BooleanClause.Occur> contextInfo, 
> int num, boolean allTermsRequired, boolean doHighlight)
> {code}
> which allowed queries like
> (A OR B AND C OR D ...)
> In our use case, we realise that we need grouping, i.e.
> (A OR B) AND (C OR D) AND (...)
> In other words, we need the intersection of multiple contexts.
> The attached patch allows passing in a varargs of maps, each one representing 
> one group. It looks a bit heavy IMHO.
> This is an initial patch.
> The question to [~mikemccand] and [~janechang] is:
> is it better to expose a FilteredQuery/Query and let the user build their own 
> query instead of passing a map?
> Exposing a filteredQuery will probably give the best flexibility to the 
> end-users.






[jira] [Updated] (LUCENE-6464) Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()

2015-05-12 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated LUCENE-6464:
---
Attachment: (was: LUCENE-6464.patch)

> Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()
> 
>
> Key: LUCENE-6464
> URL: https://issues.apache.org/jira/browse/LUCENE-6464
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.1
>Reporter: Arcadius Ahouansou
> Attachments: LUCENE-6464.patch
>
>
> This is an enhancement to LUCENE-6050 
> LUCENE-6050 added
> {code}
> lookup(CharSequence key, Map<BytesRef, BooleanClause.Occur> contextInfo, 
> int num, boolean allTermsRequired, boolean doHighlight)
> {code}
> which allowed queries like
> (A OR B AND C OR D ...)
> In our use case, we realise that we need grouping, i.e.
> (A OR B) AND (C OR D) AND (...)
> In other words, we need the intersection of multiple contexts.
> The attached patch allows passing in a varargs of maps, each one representing 
> one group. It looks a bit heavy IMHO.
> This is an initial patch.
> The question to [~mikemccand] and [~janechang] is:
> is it better to expose a FilteredQuery/Query and let the user build their own 
> query instead of passing a map?
> Exposing a filteredQuery will probably give the best flexibility to the 
> end-users.






[jira] [Updated] (LUCENE-6464) Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()

2015-05-12 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated LUCENE-6464:
---
Attachment: LUCENE-6464.patch

added lookup with BooleanQuery as a filter

> Allow possibility to group contexts in AnalyzingInfixSuggester.loockup()
> 
>
> Key: LUCENE-6464
> URL: https://issues.apache.org/jira/browse/LUCENE-6464
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.1
>Reporter: Arcadius Ahouansou
> Attachments: LUCENE-6464.patch, LUCENE-6464.patch
>
>
> This is an enhancement to LUCENE-6050 
> LUCENE-6050 added
> {code}
> lookup(CharSequence key, Map<BytesRef, BooleanClause.Occur> contextInfo, 
> int num, boolean allTermsRequired, boolean doHighlight)
> {code}
> which allowed queries like
> (A OR B AND C OR D ...)
> In our use case, we realise that we need grouping, i.e.
> (A OR B) AND (C OR D) AND (...)
> In other words, we need the intersection of multiple contexts.
> The attached patch allows passing in a varargs of maps, each one representing 
> one group. It looks a bit heavy IMHO.
> This is an initial patch.
> The question to [~mikemccand] and [~janechang] is:
> is it better to expose a FilteredQuery/Query and let the user build their own 
> query instead of passing a map?
> Exposing a filteredQuery will probably give the best flexibility to the 
> end-users.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3108 - Failure

2015-05-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3108/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
shard1 is not consistent.  Got 961 from 
http://127.0.0.1:44749/mun/x/collection1lastClient and got 247 from 
http://127.0.0.1:44752/mun/x/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 961 from 
http://127.0.0.1:44749/mun/x/collection1lastClient and got 247 from 
http://127.0.0.1:44752/mun/x/collection1
at 
__randomizedtesting.SeedInfo.seed([615D83292E4106AA:E909BCF380BD6B52]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run

[jira] [Assigned] (SOLR-7536) adding fields to newly created managed-schema could sometimes cause error

2015-05-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-7536:


Assignee: Steve Rowe

>  adding fields to newly created managed-schema could sometimes cause error
> --
>
> Key: SOLR-7536
> URL: https://issues.apache.org/jira/browse/SOLR-7536
> Project: Solr
>  Issue Type: Bug
>Reporter: Zilo Zongh
>Assignee: Steve Rowe
>
> When using a managed schema in SolrCloud, adding fields to the schema would 
> SOMETIMES end up prompting "Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server". There is of 
> course no schema.xml in the configs, only 'schema.xml.bak' and 'managed-schema'.
> Code to upload configs and create the collection:
> Path tempPath = getConfigPath();
> // customized configs with solrconfig.xml using ManagedIndexSchemaFactory
> client.uploadConfig(tempPath, name);
>
> if (numShards == 0) {
>   numShards = getNumNodes(client);
> }
>
> Create request = new CollectionAdminRequest.Create();
> request.setCollectionName(name);
> request.setNumShards(numShards);
> replicationFactor = (replicationFactor == 0 ? DEFAULT_REPLICA_FACTOR : replicationFactor);
> request.setReplicationFactor(replicationFactor);
> request.setMaxShardsPerNode(maxShardsPerNode == 0 ? replicationFactor : maxShardsPerNode);
> CollectionAdminResponse response = request.process(client);
> Adding fields to the schema, either by curl or by httpclient, would sometimes 
> yield the following error, but the error can be fixed by RELOADING the newly 
> created collection once or several times:
> INFO  - [{  "responseHeader":{"status":500,"QTime":5},  
> "errors":["Error reading input String Can't find resource 'schema.xml' in 
> classpath or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server"], 
>  "error":{"msg":"Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server",
> "trace":"java.io.IOException: Can't find resource 'schema.xml' in classpath 
> or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server
>
>   at 
> org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:98)
>   at 
> org.apache.solr.schema.SchemaManager.getFreshManagedSchema(SchemaManager.java:421)
>   at 
> org.apache.solr.schema.SchemaManager.doOperations(SchemaManager.java:104)
>   at 
> org.apache.solr.schema.SchemaManager.performOperations(SchemaManager.java:94)
>   at 
> org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:57)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.

[jira] [Created] (SOLR-7536) adding fields to newly created managed-schema could sometimes cause error

2015-05-12 Thread Zilo Zongh (JIRA)
Zilo Zongh created SOLR-7536:


 Summary:  adding fields to newly created managed-schema could 
sometimes cause error
 Key: SOLR-7536
 URL: https://issues.apache.org/jira/browse/SOLR-7536
 Project: Solr
  Issue Type: Bug
Reporter: Zilo Zongh


When using a managed schema in SolrCloud, adding fields to the schema would 
SOMETIMES end up prompting "Can't find resource 'schema.xml' in classpath or 
'/configs/collectionName', cwd=/export/solr/solr-5.1.0/server". There is of 
course no schema.xml in the configs, only 'schema.xml.bak' and 'managed-schema'.

Code to upload configs and create the collection:

Path tempPath = getConfigPath();
// customized configs with solrconfig.xml using ManagedIndexSchemaFactory
client.uploadConfig(tempPath, name);

if (numShards == 0) {
    numShards = getNumNodes(client);
}

Create request = new CollectionAdminRequest.Create();
request.setCollectionName(name);
request.setNumShards(numShards);
replicationFactor = (replicationFactor == 0 ? DEFAULT_REPLICA_FACTOR : replicationFactor);
request.setReplicationFactor(replicationFactor);
request.setMaxShardsPerNode(maxShardsPerNode == 0 ? replicationFactor : maxShardsPerNode);
CollectionAdminResponse response = request.process(client);

Adding fields to the schema, either by curl or by httpclient, would sometimes 
yield the following error, but the error can be fixed by RELOADING the newly 
created collection once or several times:

INFO  - [{  "responseHeader":{"status":500,"QTime":5},  
"errors":["Error reading input String Can't find resource 'schema.xml' in 
classpath or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server"],  
"error":{"msg":"Can't find resource 'schema.xml' in classpath or 
'/configs/collectionName', cwd=/export/solr/solr-5.1.0/server",
"trace":"java.io.IOException: Can't find resource 'schema.xml' in classpath or 
'/configs/collectionName', cwd=/export/solr/solr-5.1.0/server
 
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:98)
at 
org.apache.solr.schema.SchemaManager.getFreshManagedSchema(SchemaManager.java:421)
at 
org.apache.solr.schema.SchemaManager.doOperations(SchemaManager.java:104)
at 
org.apache.solr.schema.SchemaManager.performOperations(SchemaManager.java:94)
at 
org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:57)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParse

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2297 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2297/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:54311, 
http://127.0.0.1:54286, http://127.0.0.1:54295, http://127.0.0.1:54308, 
http://127.0.0.1:54304]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:54311, http://127.0.0.1:54286, 
http://127.0.0.1:54295, http://127.0.0.1:54308, http://127.0.0.1:54304]
at 
__randomizedtesting.SeedInfo.seed([BAF205B7AC41F334:32A63A6D02BD9ECC]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:280)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:107)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsear

[jira] [Created] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-05-12 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-7535:


 Summary: Add UpdateStream to Streaming API and Streaming Expression
 Key: SOLR-7535
 URL: https://issues.apache.org/jira/browse/SOLR-7535
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, SolrJ
Reporter: Joel Bernstein
Priority: Minor


The ticket adds an UpdateStream implementation to the Streaming API and 
streaming expressions. The UpdateStream will wrap a TupleStream and send the 
Tuples it reads to a SolrCloud collection to be indexed.

This will allow users to pull data from different Solr Cloud collections, merge 
and transform the streams and send the transformed data to another Solr Cloud 
collection.
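
A rough sketch of how such a stream might be used; the UpdateStream constructor 
shape below is an assumption (this ticket only proposes the class), and the 
wrapped source stream is elided:

{code}
// Hypothetical usage sketch -- the UpdateStream API here is assumed, not final.
TupleStream source = ...; // e.g. a CloudSolrStream reading from a source collection
TupleStream update = new UpdateStream("zkHost:2181", "destCollection", source);
update.open();
Tuple t;
while (!(t = update.read()).EOF) {
  // each Tuple read from the wrapped stream has been sent to destCollection
}
update.close();
{code}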









--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_80) - Build # 4682 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4682/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
QUERY FAILED: 
xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt449']
request=/schema/fields?wt=xml  response= [XML markup stripped in transit: a 
responseHeader with status=0 and QTime=62, then the fields array listing 
_version_ (long), constantField (tdouble), id (string), and newTestFieldInt0 
through newTestFieldInt199 (all tlong, in lexicographic order); the dump is 
truncated mid-list, and the expected newTestFieldInt449 is absent]

[jira] [Resolved] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-05-12 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-7243.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk
 Assignee: Shawn Heisey

Tests and precommit are good.

> 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
> 
>
> Key: SOLR-7243
> URL: https://issues.apache.org/jira/browse/SOLR-7243
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Hrishikesh Gadre
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
> SOLR-7243.patch, SOLR-7243.patch
>
>
> We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
> integration test is similar to this Solr unit test,
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
> Specifically we test if the Solr server returns BAD_REQUEST when provided 
> with incorrect input. The only difference is that it uses CloudSolrServer 
> instead of HttpSolrServer. The CloudSolrServer always returns SERVER_ERROR 
> error code. Please take a look
> https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
> I think we can improve the error handling by checking if the first exception 
> in the list is of type SolrException and if that is the case return the error 
> code associated with that exception. If the first exception is not of type 
> SolrException, then we can return SERVER_ERROR code. 
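
A minimal sketch of the check described above (the name of the collected-exceptions 
list is an assumption; the committed fix is in the attached patches):

{code}
// Sketch, not the committed patch: propagate the first exception's code if known.
Throwable first = exceptions.get(0); // first exception collected by CloudSolrServer
if (first instanceof SolrException) {
  int code = ((SolrException) first).code(); // e.g. 400 BAD_REQUEST
  throw new SolrException(SolrException.ErrorCode.getErrorCode(code),
      first.getMessage(), first);
}
throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
    first.getMessage(), first);
{code}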



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540995#comment-14540995
 ] 

ASF subversion and git services commented on SOLR-7243:
---

Commit 1679122 from [~elyograg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1679122 ]

SOLR-7243: Return more informative error from CloudSolrServer when available. 
(merge trunk r1679099)

> 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
> 
>
> Key: SOLR-7243
> URL: https://issues.apache.org/jira/browse/SOLR-7243
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
> Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
> SOLR-7243.patch, SOLR-7243.patch
>
>
> We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
> integration test is similar to this Solr unit test,
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
> Specifically we test if the Solr server returns BAD_REQUEST when provided 
> with incorrect input. The only difference is that it uses CloudSolrServer 
> instead of HttpSolrServer. The CloudSolrServer always returns SERVER_ERROR 
> error code. Please take a look
> https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
> I think we can improve the error handling by checking if the first exception 
> in the list is of type SolrException and if that is the case return the error 
> code associated with that exception. If the first exception is not of type 
> SolrException, then we can return SERVER_ERROR code. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7503) Recovery after ZK session expiration happens in a single thread for all cores in a node

2015-05-12 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-7503:


Assignee: Timothy Potter

> Recovery after ZK session expiration happens in a single thread for all cores 
> in a node
> ---
>
> Key: SOLR-7503
> URL: https://issues.apache.org/jira/browse/SOLR-7503
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
>  Labels: impact-high
> Fix For: Trunk, 5.2
>
>
> Currently cores are registered in parallel in an executor. However, when 
> there's a ZK expiration, the recovery, which also happens in the register 
> call, happens in a single thread:
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L300
> We should make these happen in parallel as well so that recovery after ZK 
> expiration doesn't take forever.
> Thanks to [~mewmewball] for catching this.
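
A minimal sketch of the kind of change implied, using a plain executor (names 
such as {{coresToRecover}} and the bare {{register}} call are assumptions, not 
the actual patch):

{code}
// Sketch only: run the per-core register/recovery calls in parallel.
ExecutorService pool = Executors.newCachedThreadPool();
List<Future<?>> futures = new ArrayList<>();
for (final CoreDescriptor core : coresToRecover) { // cores needing re-registration
  futures.add(pool.submit(new Runnable() {
    @Override
    public void run() {
      register(core); // the existing per-core register call, now concurrent
    }
  }));
}
for (Future<?> f : futures) {
  f.get(); // wait for all recoveries; propagates failures (throws checked exceptions)
}
pool.shutdown();
{code}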



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7503) Recovery after ZK session expiration happens in a single thread for all cores in a node

2015-05-12 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540928#comment-14540928
 ] 

Timothy Potter commented on SOLR-7503:
--

Cooking up a patch now.

> Recovery after ZK session expiration happens in a single thread for all cores 
> in a node
> ---
>
> Key: SOLR-7503
> URL: https://issues.apache.org/jira/browse/SOLR-7503
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
>  Labels: impact-high
> Fix For: Trunk, 5.2
>
>
> Currently cores are registered in parallel in an executor. However, when 
> there's a ZK expiration, the recovery, which also happens in the register 
> call, happens in a single thread:
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L300
> We should make these happen in parallel as well so that recovery after ZK 
> expiration doesn't take forever.
> Thanks to [~mewmewball] for catching this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Recent Java 9 commit (e5b66323ae45) breaks fsync on directory

2015-05-12 Thread Brian Burkhalter
I have created an enhancement issue here:

https://bugs.openjdk.java.net/browse/JDK-8080235

Brian

On May 12, 2015, at 3:10 PM, Brian Burkhalter  
wrote:

> I will create an issue now and post the ID.



Re: Recent Java 9 commit (e5b66323ae45) breaks fsync on directory

2015-05-12 Thread Brian Burkhalter

On May 12, 2015, at 2:11 PM, Alan Bateman  wrote:

> I've been too busy with other JDK 9 work to spend time on this issue 
> recently. I do think we should introduce some support for this use-case as 
> it's clearly important.
> 
> The issue with adding a method to Files is that it requires rev'ing the 
> service provider interface, which is why I brought up the possibility of 
> having it work as an OpenOption.
> 
> We should start by creating a bug. Brian Burkhalter may have created one 
> already.

I will create an issue now and post the ID.

I read the prior discussion thread 
(http://mail.openjdk.java.net/pipermail/nio-dev/2015-January/002979.html) and 
am looking into the two approaches already suggested (Files.forceSync() and 
OpenOption).

Any further comments appreciated.

Thanks,

Brian

Re: Why morphlines code is in Solr?

2015-05-12 Thread Shawn Heisey
On 5/12/2015 2:14 PM, Noble Paul wrote:
> When I said jar dependency, I did not mean that we check in the jar.
>
> We use httpclient, but if you check out lucene trunk you don't get the
> httpclient jar, but the build process will add it to the distribution.

Doesn't that describe what happens with morphlines?  The build process
adds it to the distribution.

If I'm completely missing the point of what you're saying, I'll shut up
and let you elaborate and discuss it with other people who know what's
going on.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Recent Java 9 commit (e5b66323ae45) breaks fsync on directory

2015-05-12 Thread Alan Bateman

On 12/05/2015 19:59, Uwe Schindler wrote:

Hallo Alan,

I just wanted to come back to this issue, because there was no further communication 
recently regarding the behavior of Java 9 with opening a FileChannel on a directory to 
fsync the directory metadata. Unfortunately, this would break the improved data safety 
after commits to Apache Lucene indexes. This would affect many applications like Apache 
Solr and Elasticsearch that rely on fsyncing the metadata on UNIX systems (Linux, 
Solaris, MacOSX). Recently Elasticsearch also started to use the same approach for its 
transaction log! Because we (Apache Lucene) use atomic rename functionality to 
"publish" commits, losing the directory metadata after a power failure loses 
all data in the commit done before the failure. With Java 7 and Java 8 we already did 
extensive tests with remote controlled power plugs switching a test machine on and off 
and validating that the index was intact. This is no longer working with Java 9 because of 
the change.

Our question now: the discussion was to maybe allow another OpenOption for such special 
operations, which are important for other databases, too (I assume Apache Derby, HSQLDB, 
or other databases written in Java would like to do similar things). Is there anything we 
can do to make a proposal for a new API, like starting a JEP or opening a bug report? I 
would take the opportunity to get involved in the OpenJDK project to help bring this 
forward.

Maybe instead of complex open options, we should simply add a new method to the 
Files class: Files.force/fsync(Path fileOrDir, boolean metadata) that does the 
right thing depending on the file / operating system?
I've been too busy with other JDK 9 work to spend time on this issue 
recently. I do think we should introduce some support for this use-case 
as it's clearly important.


The issue with adding a method to Files is that it requires rev'ing the 
service provider interface, which is why I brought up the possibility 
of having it work as an OpenOption.


We should start by creating a bug. Brian Burkhalter may have created 
one already.


-Alan


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540801#comment-14540801
 ] 

ASF subversion and git services commented on SOLR-7243:
---

Commit 1679099 from [~elyograg] in branch 'dev/trunk'
[ https://svn.apache.org/r1679099 ]

SOLR-7243: Return more informative error from CloudSolrServer when available.

> 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
> 
>
> Key: SOLR-7243
> URL: https://issues.apache.org/jira/browse/SOLR-7243
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
> Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
> SOLR-7243.patch, SOLR-7243.patch
>
>
> We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
> integration test is similar to this Solr unit test,
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
> Specifically we test if the Solr server returns BAD_REQUEST when provided 
> with incorrect input. The only difference is that it uses CloudSolrServer 
> instead of HttpSolrServer. The CloudSolrServer always returns SERVER_ERROR 
> error code. Please take a look
> https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
> I think we can improve the error handling by checking if the first exception 
> in the list is of type SolrException and if that is the case return the error 
> code associated with that exception. If the first exception is not of type 
> SolrException, then we can return SERVER_ERROR code. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7534) Handle internationalized quotes in queries

2015-05-12 Thread Dawid Weiss (JIRA)
Dawid Weiss created SOLR-7534:
-

 Summary: Handle internationalized quotes in queries
 Key: SOLR-7534
 URL: https://issues.apache.org/jira/browse/SOLR-7534
 Project: Solr
  Issue Type: Improvement
Reporter: Dawid Weiss
Priority: Minor


This is real feedback from a customer:

bq. Don't talk to me about “ and " as this is the number one problem we have 
with people composing SOLR phrase queries.

It's kind of funny at first... until you realize how many different quote 
characters are out there and that many applications (for example Microsoft 
Word) automatically "convert" standard ASCII quotes into locale-sensitive 
unicode variants (examples on blogs, documentation, etc.).

Perhaps there's a way to parse those various quote characters with some 
leniency?

http://en.wikipedia.org/wiki/Quotation_mark#Summary_table_for_all_languages
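
One lenient approach (an illustration only, not an existing Solr option) would 
be to fold the common Unicode quote variants into their ASCII equivalents 
before query parsing:

{code}
// Sketch: map common Unicode quotation marks to ASCII before parsing a query.
public static String normalizeQuotes(String q) {
  return q
      .replace('\u201C', '"')   // left double quotation mark
      .replace('\u201D', '"')   // right double quotation mark
      .replace('\u201E', '"')   // double low-9 quotation mark
      .replace('\u00AB', '"')   // left-pointing guillemet
      .replace('\u00BB', '"')   // right-pointing guillemet
      .replace('\u2018', '\'')  // left single quotation mark
      .replace('\u2019', '\''); // right single quotation mark
}
{code}

The linked summary table lists many more variants, so a configurable mapping 
(in the spirit of a MappingCharFilter) would probably be the more Solr-idiomatic 
route.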



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6220) Replica placement strategy for solrcloud

2015-05-12 Thread Jessica Cheng Mallet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540687#comment-14540687
 ] 

Jessica Cheng Mallet commented on SOLR-6220:


This doesn't seem to handle addReplica. I think it'd be nice to merge in the 
logic of the Assign class and get rid of it completely so there's just one 
place to handle any kind of replica assignment.

> Replica placement strategy for solrcloud
> 
>
> Key: SOLR-6220
> URL: https://issues.apache.org/jira/browse/SOLR-6220
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch, 
> SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch
>
>
> h1.Objective
> Most cloud-based systems allow users to specify rules on how the 
> replicas/nodes of a cluster are allocated. Solr should have a flexible 
> mechanism through which we can control the allocation of replicas, or later 
> change it to suit the needs of the system.
> All configuration is on a per-collection basis. The rules are applied 
> whenever a replica is created in any of the shards of a given collection 
> during:
>  * collection creation
>  * shard splitting
>  * add replica
>  * createshard
> There are two aspects to how replicas are placed: snitch and placement. 
> h2.snitch 
> How to identify the tags of nodes. Snitches are configured through the 
> collection create command with the snitch param, e.g. snitch=EC2Snitch or 
> snitch=class:EC2Snitch
> h2.ImplicitSnitch 
> This is shipped by default with Solr. The user does not need to specify 
> {{ImplicitSnitch}} in the configuration. If the tags known to ImplicitSnitch 
> are present in the rules, it is automatically used.
> Tags provided by ImplicitSnitch:
> # cores : no. of cores in the node
> # disk : disk space available on the node
> # host : host name of the node
> # node : node name
> # D.* : values available from system properties. {{D.key}} means a value 
> that is passed to the node as {{-Dkey=keyValue}} during node startup. It is 
> possible to use rules like {{D.key:expectedVal,shard:*}}
> h2.Rules 
> A rule tells how many replicas of a given shard need to be assigned to nodes 
> with the given key-value pairs. These parameters are passed to the collection 
> CREATE API as a multivalued parameter "rule". The values will be saved in the 
> state of the collection as follows:
> {code:Javascript}
> {
>   "mycollection": {
>     "snitch": { "class": "ImplicitSnitch" },
>     "rules": [
>       {"cores": "4-"},
>       {"replica": "1", "shard": "*", "node": "*"},
>       {"disk": ">100"}
>     ]
>   }
> }
> {code}
> A rule is specified in a pseudo-JSON syntax, which is a map of keys and 
> values.
> * Each collection can have any number of rules. As long as the rules do not 
> conflict with each other it is OK; otherwise an error is thrown.
> * In each rule, shard and replica can be omitted:
> ** the default value of replica is {{\*}}, meaning ANY, or you can specify a 
> count and an operand such as {{<}} (less than) or {{>}} (greater than)
> ** the value of shard can be a shard name, or {{\*}} meaning EACH, or {{**}} 
> meaning ANY. The default value is {{\*\*}} (ANY)
> * There should be exactly one extra condition in a rule besides {{shard}} 
> and {{replica}}.
> * All keys other than {{shard}} and {{replica}} are called tags, and the 
> tags are nothing but values provided by the snitch for each node.
> * By default, certain tags such as {{node}}, {{host}}, {{port}} are provided 
> by the system implicitly.
> h3.How are nodes picked up? 
> Nodes are not picked at random. The rules are used to first sort the nodes 
> according to affinity. For example, if there is a rule that says 
> {{disk:100+}}, nodes with more disk space are given higher preference. And 
> if the rule is {{disk:100-}}, nodes with less disk space are given priority. 
> If everything else is equal, nodes with fewer cores are given higher 
> priority.
> h3.Fuzzy match
> Fuzzy match can be applied when strict matches fail. The values can be 
> suffixed with {{~}} to specify fuzziness.
> Example rules:
> {noformat}
>  #Example requirement: "use only one replica of a shard in a host if 
> possible; if no matches are found, relax that rule"
> rack:*,shard:*,replica:<2~
> #Another example: assign all replicas to nodes with disk space of 100GB or 
> more, or relax the rule if not possible. This ensures that if no node with a 
> 100GB disk exists, nodes are picked in order of size, say an 85GB node would 
> be picked over an 80GB node
> disk:>100~
> {noformat}
> Examples:
> {noformat}
> #in each rack there can be max two replicas of a given shard
>  rack:*,shard:*,replica:<3
> //in each rack there can

[jira] [Updated] (SOLR-7465) Flesh out solr/example/files

2015-05-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-7465:
---
Attachment: SOLR-7465.patch

Tweaked README slightly (updated title and added hyperlink). Adjusted the 
hit.vm template to show a file icon, like the techproducts template does, and 
changed the content_type facet to show a friendly version.

The content_type friendly version is currently awkward: two text/plain docs 
with different charset suffixes will both show as "txt". Let's do some 
content-type normalization during indexing, via a JavaScript update processor, 
to make this nicer.
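
The normalization itself could be as small as stripping the charset suffix at 
index time. A sketch of the idea as a Java UpdateRequestProcessor (the comment 
above suggests JavaScript; this is just the same logic in Java, with the 
factory wiring omitted):

{code}
// Sketch: normalize content_type by dropping "; charset=..." suffixes at index time.
public class ContentTypeNormalizer extends UpdateRequestProcessor {
  public ContentTypeNormalizer(UpdateRequestProcessor next) { super(next); }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    Object v = doc.getFieldValue("content_type");
    if (v != null) {
      String ct = v.toString();
      int semi = ct.indexOf(';'); // e.g. "text/plain; charset=ISO-8859-1" -> "text/plain"
      if (semi >= 0) {
        doc.setField("content_type", ct.substring(0, semi).trim());
      }
    }
    super.processAdd(cmd);
  }
}
{code}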

> Flesh out solr/example/files
> 
>
> Key: SOLR-7465
> URL: https://issues.apache.org/jira/browse/SOLR-7465
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
> Attachments: SOLR-7465.patch, SOLR-7465.patch, SOLR-7465.patch
>
>
> this README.txt file that's actually some sort of bizarre shell script 
> exists on trunk in an otherwise empty directory...
> https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup
> file added by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652721
> all of the other files in this directory removed by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652759



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7275) Pluggable authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7275:
---
Attachment: SOLR-7275.patch

Adding back a way to get (1) the Resource and (2) the RequestType from the 
context. It seems this was removed in an update a couple of patches back.
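
For reference, a sketch of the accessor shapes implied here; the names are 
assumptions based on this comment, not taken from the patch itself:

{code}
// Hypothetical accessors restored on the request context (names assumed):
String getResource();            // the handler path, e.g. "/select" or "/update"
OperationType getRequestType();  // the request's operation type from the context
{code}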

> Pluggable authorization module in Solr
> --
>
> Key: SOLR-7275
> URL: https://issues.apache.org/jira/browse/SOLR-7275
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch
>
>
> Solr needs an interface that makes it easy for different authorization 
> systems to be plugged into it. Here's what I plan on doing:
> Define an interface {{SolrAuthorizationPlugin}} with one single method 
> {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
> return an {{SolrAuthorizationResponse}} object. The object as of now would 
> only contain a single boolean value but in the future could contain more 
> information e.g. ACL for document filtering etc.
> The reason why we need a context object is so that the plugin doesn't need to 
> understand Solr's capabilities e.g. how to extract the name of the collection 
> or other information from the incoming request as there are multiple ways to 
> specify the target collection for a request. Similarly request type can be 
> specified by {{qt}} or {{/handler_name}}.
> Flow:
> Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
> {code}
> public interface SolrAuthorizationPlugin {
>   public SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
> }
> {code}
> {code}
> public  class SolrRequestContext {
>   UserInfo; // Will contain user context from the authentication layer.
>   HTTPRequest request;
>   Enum OperationType; // Correlated with user roles.
>   String[] CollectionsAccessed;
>   String[] FieldsAccessed;
>   String Resource;
> }
> {code}
> {code}
> public class SolrAuthorizationResponse {
>   boolean authorized;
>   public boolean isAuthorized();
> }
> {code}
> User Roles: 
> * Admin
> * Collection Level:
>   * Query
>   * Update
>   * Admin
> Using this framework, an implementation could be written for specific 
> security systems e.g. Apache Ranger or Sentry. It would keep all the security 
> system specific code out of Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Why morphlines code is in Solr?

2015-05-12 Thread Noble Paul
When I said jar dependency, I did not mean that we check in the jar.

We use httpclient, but if you check out lucene trunk you don't get the
httpclient jar, but the build process will add it to the distribution.

On Wed, May 13, 2015 at 1:06 AM, Shawn Heisey  wrote:

> On 5/12/2015 11:58 AM, Noble Paul wrote:
> > The Morphlines contrib that we have in Solr is copied over from
> > Morphlines. The files are exactly same. Isn't it better to just have a
> > jar dependency on that project?
>
> [solr@bigindy5 src]$ svn co
> https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x
>  lots of lines snipped 
> [solr@bigindy5 src]$ cd branch_5x
> [solr@bigindy5 branch_5x]$ find . -name "*.jar"
> [solr@bigindy5 branch_5x]$
>
> There are no jars in a clean source checkout.
>
> [solr@bigindy5 branch_5x]$ cd solr
> [solr@bigindy5 solr]$ ant clean dist
>  lots of lines snipped 
>
> At this point, after I have built the solr distribution, if I repeat the
> find command looking for jars, there are quite a lot of them.  I believe
> that the third-party jars are included in the contrib directory in the
> download so that an end user doesn't have to scour the Internet looking
> for dependencies, they are included in the Solr download.  I'm
> reasonably sure that those jars are retrieved during the compile by ivy,
> and the version retrieved is determined by the ivy config found in the
> lucene directory.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
-
Noble Paul


[jira] [Updated] (SOLR-7402) A default/OTB plugin authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7402:
---
Attachment: SOLR-7402.patch

> A default/OTB plugin authorization module in Solr
> -
>
> Key: SOLR-7402
> URL: https://issues.apache.org/jira/browse/SOLR-7402
> Project: Solr
>  Issue Type: New Feature
>  Components: security, SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7402.patch, SOLR-7402.patch, SOLR-7402.patch
>
>
> SOLR-7275 (yet to be committed at this point) would add a pluggable 
> authorization framework in Solr. We should have a default or a basic out of 
> the box implementation that solves the following:
> 1. Gives end-users something OTB that they can use.
> 2. Provides a reference point on how to write a plugin.
> Currently, this is a part of the patch on SOLR-7275. I'm splitting it into 
> its own issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7402) A default/OTB plugin authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7402:
---
Attachment: (was: SOLR-7402.patch)

> A default/OTB plugin authorization module in Solr
> -
>
> Key: SOLR-7402
> URL: https://issues.apache.org/jira/browse/SOLR-7402
> Project: Solr
>  Issue Type: New Feature
>  Components: security, SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7402.patch, SOLR-7402.patch, SOLR-7402.patch
>
>
> SOLR-7275 (yet to be committed at this point) would add a pluggable 
> authorization framework in Solr. We should have a default or a basic out of 
> the box implementation that solves the following:
> 1. Gives end-users something OTB that they can use.
> 2. Provides a reference point on how to write a plugin.
> Currently, this is a part of the patch on SOLR-7275. I'm splitting it into 
> its own issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60-ea-b12) - Build # 12477 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12477/
Java: 32bit/jdk1.8.0_60-ea-b12 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:48842/_sn/l, https://127.0.0.1:57373/_sn/l, 
https://127.0.0.1:42321/_sn/l, https://127.0.0.1:46750/_sn/l, 
https://127.0.0.1:52586/_sn/l]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:48842/_sn/l, 
https://127.0.0.1:57373/_sn/l, https://127.0.0.1:42321/_sn/l, 
https://127.0.0.1:46750/_sn/l, https://127.0.0.1:52586/_sn/l]
at 
__randomizedtesting.SeedInfo.seed([60BDC287EA51A21:8E5FE3F2D05977D9]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1074)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:846)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:789)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:108)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.eva

[jira] [Updated] (SOLR-7402) A default/OTB plugin authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7402:
---
Attachment: SOLR-7402.patch

Patch updated to get it in sync with the latest patch on SOLR-7275; it also 
adds whitelisting of IPs.

> A default/OTB plugin authorization module in Solr
> -
>
> Key: SOLR-7402
> URL: https://issues.apache.org/jira/browse/SOLR-7402
> Project: Solr
>  Issue Type: New Feature
>  Components: security, SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7402.patch, SOLR-7402.patch, SOLR-7402.patch
>
>
> SOLR-7275 (yet to be committed at this point) would add a pluggable 
> authorization framework in Solr. We should have a default or a basic out of 
> the box implementation that solves the following:
> 1. Gives end-users something OTB that they can use.
> 2. Provides a reference point on how to write a plugin.
> Currently, this is a part of the patch on SOLR-7275. I'm splitting it into 
> its own issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Why morphlines code is in Solr?

2015-05-12 Thread Shawn Heisey
On 5/12/2015 11:58 AM, Noble Paul wrote:
> The Morphlines contrib that we have in Solr is copied over from
> Morphlines. The files are exactly same. Isn't it better to just have a
> jar dependency on that project?

[solr@bigindy5 src]$ svn co
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x
 lots of lines snipped 
[solr@bigindy5 src]$ cd branch_5x
[solr@bigindy5 branch_5x]$ find . -name "*.jar"
[solr@bigindy5 branch_5x]$

There are no jars in a clean source checkout.

[solr@bigindy5 branch_5x]$ cd solr
[solr@bigindy5 solr]$ ant clean dist
 lots of lines snipped 

At this point, after I have built the solr distribution, if I repeat the
find command looking for jars, there are quite a lot of them.  I believe
that the third-party jars are included in the contrib directory in the
download so that an end user doesn't have to scour the Internet looking
for dependencies, they are included in the Solr download.  I'm
reasonably sure that those jars are retrieved during the compile by ivy,
and the version retrieved is determined by the ivy config found in the
lucene directory.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3106 - Failure

2015-05-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3106/

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([16A16C0A2762F5D:C0A2CB860310FEF4]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:576)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability(TestLBHttpSolrClient.java:219)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.S

[jira] [Updated] (SOLR-7465) Flesh out solr/example/files

2015-05-12 Thread Esther Quansah (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Quansah updated SOLR-7465:
-
Attachment: SOLR-7465.patch

Updated patch

> Flesh out solr/example/files
> 
>
> Key: SOLR-7465
> URL: https://issues.apache.org/jira/browse/SOLR-7465
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
> Attachments: SOLR-7465.patch, SOLR-7465.patch
>
>
> this README.txt file that's actually some sort of bizarre shell script 
> exists on trunk in an otherwise empty directory...
> https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup
> file added by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652721
> all of the other files in this directory removed by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652759



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7465) Flesh out solr/example/files

2015-05-12 Thread Esther Quansah (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Quansah updated SOLR-7465:
-
Attachment: SOLR-7465.patch

Updated README for example/files 

> Flesh out solr/example/files
> 
>
> Key: SOLR-7465
> URL: https://issues.apache.org/jira/browse/SOLR-7465
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
> Attachments: SOLR-7465.patch
>
>
> this README.txt file that's actually some sort of bizarre shell script 
> exists on trunk in an otherwise empty directory...
> https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup
> file added by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652721
> all of the other files in this directory removed by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652759



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12651 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12651/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls

Error Message:
Shard split did not complete. Last recorded state: running 
expected:<[completed]> but was:<[running]>

Stack Trace:
org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: 
running expected:<[completed]> but was:<[running]>
at 
__randomizedtesting.SeedInfo.seed([8B115F850A14869C:D375D3E40C7E2E48]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carr

RE: Recent Java 9 commit (e5b66323ae45) breaks fsync on directory

2015-05-12 Thread Uwe Schindler
Hello Alan,

I just wanted to come back to this issue, because there was no further 
communication recently regarding the behavior of Java 9 with opening a 
FileChannel on a directory to fsync the directory metadata. Unfortunately, this 
would break the improved data safety after commits to Apache Lucene indexes. 
This would affect many applications like Apache Solr and Elasticsearch that 
rely on fsyncing the metadata on UNIX systems (Linux, Solaris, MacOSX). 
Recently Elasticsearch also started to use the same approach for its 
transaction log! Because we (Apache Lucene) use atomic rename functionality to 
"publish" commits, losing the directory metadata after a power failure loses 
all data in the commit done before the failure. With Java 7 and Java 8 we 
already did extensive tests with remote-controlled power plugs, switching a 
test machine on and off and validating that the index was intact. This no 
longer works with Java 9 because of the change.

Our question now: the earlier discussion was about perhaps allowing another 
OpenOption for such special cases, which would be important for other databases, 
too (I assume Apache Derby, HSQLDB, or other databases written in Java would 
like to do similar things). Is there anything we can do to make a proposal for 
a new API, like starting a JEP or opening a bug report? I would take the 
opportunity to get involved in the OpenJDK project to help bring this forward.

Maybe instead of complex open options, we should simply add a new method to the 
Files class: Files.force/fsync(Path fileOrDir, boolean metadata) that does the 
right thing depending on the file / operating system?

The Java 7 / Java 8 approach we use at the moment is already a bit of an 
undocumented hack (guarded in a try/catch), because some systems like Windows 
do not allow fsync on directories (Windows already ensures that the metadata 
is written correctly after an atomic rename). On the other hand, MacOSX 
appears to ignore fsync requests completely, also on files, unless you use a 
special fcntl. So adding an API that works around the different operating 
system peculiarities would be very good.
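
For reference, here is a minimal sketch of that guarded approach (class and 
method names are illustrative, not the actual Lucene code):

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class DirectoryFsync {
  // Best-effort fsync of a directory's metadata: open the directory for
  // read and force it to stable storage. On systems that refuse to open
  // directories (e.g. Windows) the IOException is swallowed, because the
  // OS persists the directory metadata itself after an atomic rename.
  public static void trySyncDirectory(Path dir) {
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true); // true = also force metadata
    } catch (IOException expectedOnSomeSystems) {
      // e.g. Windows: opening a directory fails with access denied
    }
  }
}
{code}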

Uwe

-
Uwe Schindler
uschind...@apache.org 
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/

> -Original Message-
> From: nio-dev [mailto:nio-dev-boun...@openjdk.java.net] On Behalf Of
> Uwe Schindler
> Sent: Friday, January 09, 2015 7:56 PM
> To: 'Alan Bateman'; nio-...@openjdk.java.net
> Cc: rory.odonn...@oracle.com; 'Balchandra Vaidya'
> Subject: RE: Recent Java 9 commit (e5b66323ae45) breaks fsync on directory
> 
> Hi Alan,
> 
> Thank you for the quick response!
> 
> The requirement to fsync on the directory from Java has come up quite often
> on the web (also before the Java 7 release - but before Java 7 it was
> impossible to do from Java code).
> 
> This is one example from before Java 7:
> http://www.quora.com/Is-there-any-way-to-fsync%28%29-a-directory-
> from-Java
> 
> Stackoverflow has some questions about this, too. A famous one (ranked #1
> at Google is this one):
> http://stackoverflow.com/questions/7694307/using-filechannel-to-fsync-a-
> directory-with-nio-2
> 
> In fact this is exactly what we do in Lucene. The question here "Can I count
> on this working on all Unix platforms, in future versions of Java, and in non-
> Oracle JVMs?" is now answered -> NO.
> 
> Fsyncing on a directory is in most cases not needed for "standard Java
> programs", but for those who really want to do this (like Lucene or Hadoop),
> a separate OpenOption would be an idea! In Lucene code (which has Java 7 as
> its minimum requirement) we can look for the new OpenOption via reflection
> and pass it. Unfortunately, people using currently released
> Lucene/Solr/Elasticsearch versions can no longer be sure that their index
> survives power outages if they run it with Java 9. If we can step in early
> and test the new API, we can already release artifacts which at least "try"
> to use the new OpenOption (if available) and fall back to Java 7/Java 8
> semantics otherwise.
> 
> Personally, I would prefer to just document that opening a file channel is
> only guaranteed to work with regular files, but may fail with other types of
> files (think of directories, or /dev/xxx devices). The code as it is now was
> working fine for 2 major Java releases, so why change semantics? If somebody
> accidentally opens a directory for reading, it is perfectly fine if he gets an
> IOException a bit delayed. If one opens a block device and writes a
> non-block-aligned bunch of data, it will fail, too. Your patch does not handle
> this case, it only tests for directories. So I think we should leave it up to
> the operating system what you can do with a "file".
> 
> About Windows: In fact, you can also open a directory with CreateFile() [1],
> but with standard flags this fails with access denied (this is what we see in
> Java 7 and Java 8). You have to pass FILE_FLAG_BACKU

[jira] [Commented] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-05-12 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540378#comment-14540378
 ] 

Hrishikesh Gadre commented on SOLR-7243:


[~elyograg] I don't see any negative response for this. Should we commit?

> 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
> 
>
> Key: SOLR-7243
> URL: https://issues.apache.org/jira/browse/SOLR-7243
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
> Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
> SOLR-7243.patch, SOLR-7243.patch
>
>
> We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
> integration test is similar to this Solr unit test,
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
> Specifically we test if the Solr server returns BAD_REQUEST when provided 
> with incorrect input. The only difference is that it uses CloudSolrServer 
> instead of HttpSolrServer. The CloudSolrServer always returns the 
> SERVER_ERROR error code. Please take a look at
> https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
> I think we can improve the error handling by checking if the first exception 
> in the list is of type SolrException and if that is the case return the error 
> code associated with that exception. If the first exception is not of type 
> SolrException, then we can return SERVER_ERROR code. 
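
A minimal sketch of the proposed check (the class and method here are 
illustrative wiring, not the actual CloudSolrServer code):

{code}
import java.util.List;
import org.apache.solr.common.SolrException;

public final class ErrorCodes {
  // Sketch: prefer the error code of the first SolrException collected
  // from the shard responses over a blanket SERVER_ERROR.
  public static int resolve(List<Throwable> exceptions) {
    if (!exceptions.isEmpty() && exceptions.get(0) instanceof SolrException) {
      return ((SolrException) exceptions.get(0)).code(); // e.g. 400 for BAD_REQUEST
    }
    return SolrException.ErrorCode.SERVER_ERROR.code; // fall back to 500
  }
}
{code}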



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Why is the Morphlines code in Solr?

2015-05-12 Thread Noble Paul
The Morphlines contrib that we have in Solr is copied over from Morphlines.
The files are exactly the same. Isn't it better to just have a jar dependency
on that project?


-- 
-
Noble Paul


[jira] [Comment Edited] (SOLR-7533) DIH doesn't return "Indexing completed" when 0 documents processed

2015-05-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540276#comment-14540276
 ] 

Shawn Heisey edited comment on SOLR-7533 at 5/12/15 5:26 PM:
-

Determining DIH status in a program is a far harder problem than it needs to 
be.  I've filed a number of issues.  I have found the DIH code to be very 
layered, using abstractions effectively, but hard to understand for someone 
who's not familiar with it.  I'm sure that once you become familiar with it, 
the use of abstraction actually helps, but it's a barrier when you don't know 
the code already.

This looks like a nearly identical issue to SOLR-2729, but in newer versions 
the extraneous space has been removed from the "Time taken" field name.  I 
can't tell if the original command was a delta-import, but I assume that it 
probably was, which would make this a duplicate of SOLR-2729.


was (Author: elyograg):
Determining DIH status in a program is a far harder problem than it needs to 
be.  I've filed a number of issues.  I have found the DIH code to be very 
layered, using abstractions effectively, but hard to understand for someone 
who's not familiar with it.  I'm sure that once you become familiar with it, 
the use of abstraction actually helps, but it's a barrier when you don't know 
the code already.

This looks like a nearly identical issue to SOLR-2729, but in newer versions 
the extraneous space has been removed from the "Time taken" field name, and the 
"Indexing complete" message is entirely gone.


> DIH doesn't return "Indexing completed" when 0 documents processed
> --
>
> Key: SOLR-7533
> URL: https://issues.apache.org/jira/browse/SOLR-7533
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 5.1
>Reporter: Jellyfrog
>
> Normally, the status for a DIH when done will be something like:
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "initArgs": [
> "defaults",
> [
> "config",
> "data_import.xml"
> ]
> ],
> "command": "status",
> "status": "idle",
> "importResponse": "",
> "statusMessages": {
> "Total Requests made to DataSource": "1",
> "Total Rows Fetched": "480463",
> "Total Documents Skipped": "0",
> "Full Dump Started": "2015-04-21 14:16:17",
> "": "Indexing completed. Added/Updated: 480463 documents. Deleted 0 
> documents.",
> "Total Documents Processed": "480463",
> "Time taken": "0:12:31.863"
> }
> }
> {code}
> But when it processes 0 rows, it's missing the "Indexing completed" part:
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "initArgs": [
> "defaults",
> [
> "config",
> "data_import.xml"
> ]
> ],
> "command": "status",
> "status": "idle",
> "importResponse": "",
> "statusMessages": {
> "Total Requests made to DataSource": "1",
> "Total Rows Fetched": "0",
> "Total Documents Processed": "0",
> "Total Documents Skipped": "0",
> "Full Dump Started": "2015-05-12 17:39:44",
> "Time taken": "0:0:2.805"
> }
> }
> {code}
> This makes the output very inconsistent and harder to handle programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7533) DIH doesn't return "Indexing completed" when 0 documents processed

2015-05-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540276#comment-14540276
 ] 

Shawn Heisey commented on SOLR-7533:


Determining DIH status in a program is a far harder problem than it needs to 
be.  I've filed a number of issues.  I have found the DIH code to be very 
layered, using abstractions effectively, but hard to understand for someone 
who's not familiar with it.  I'm sure that once you become familiar with it, 
the use of abstraction actually helps, but it's a barrier when you don't know 
the code already.

This looks like a nearly identical issue to SOLR-2729, but in newer versions 
the extraneous space has been removed from the "Time taken" field name, and the 
"Indexing complete" message is entirely gone.


> DIH doesn't return "Indexing completed" when 0 documents processed
> --
>
> Key: SOLR-7533
> URL: https://issues.apache.org/jira/browse/SOLR-7533
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 5.1
>Reporter: Jellyfrog
>
> Normally, the status for a DIH when done will be something like:
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "initArgs": [
> "defaults",
> [
> "config",
> "data_import.xml"
> ]
> ],
> "command": "status",
> "status": "idle",
> "importResponse": "",
> "statusMessages": {
> "Total Requests made to DataSource": "1",
> "Total Rows Fetched": "480463",
> "Total Documents Skipped": "0",
> "Full Dump Started": "2015-04-21 14:16:17",
> "": "Indexing completed. Added/Updated: 480463 documents. Deleted 0 
> documents.",
> "Total Documents Processed": "480463",
> "Time taken": "0:12:31.863"
> }
> }
> {code}
> But when it processes 0 rows, it's missing the "Indexing completed" part:
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "initArgs": [
> "defaults",
> [
> "config",
> "data_import.xml"
> ]
> ],
> "command": "status",
> "status": "idle",
> "importResponse": "",
> "statusMessages": {
> "Total Requests made to DataSource": "1",
> "Total Rows Fetched": "0",
> "Total Documents Processed": "0",
> "Total Documents Skipped": "0",
> "Full Dump Started": "2015-05-12 17:39:44",
> "Time taken": "0:0:2.805"
> }
> }
> {code}
> This makes the output very inconsistent and harder to handle programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6479) Create utility to generate Classifier's confusion matrix

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540233#comment-14540233
 ] 

ASF subversion and git services commented on LUCENE-6479:
-

Commit 1679006 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1679006 ]

LUCENE-6479 - added ConfusionMatrixGenerator

> Create utility to generate Classifier's confusion matrix
> 
>
> Key: LUCENE-6479
> URL: https://issues.apache.org/jira/browse/LUCENE-6479
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: Trunk
>
>
> In order to debug and compare accuracy of {{Classifiers}} it's often useful 
> to print the related [confusion 
> matrix|http://en.wikipedia.org/wiki/Confusion_matrix] so it'd be good to 
> provide such a utility class/method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6045) Refactor classifier APIs to work better with multi threading

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540230#comment-14540230
 ] 

ASF subversion and git services commented on LUCENE-6045:
-

Commit 1679005 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1679005 ]

LUCENE-6045 - refactored BPC constructor to be more consistent with others

> Refactor classifier APIs to work better with multi threading
> ---
>
> Key: LUCENE-6045
> URL: https://issues.apache.org/jira/browse/LUCENE-6045
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: Trunk
>
>
> In 
> https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
>  [~simonw] pointed out that the current Classifier API doesn't work well in 
> multi threading environments: 
> bq. The interface you defined has some problems with respect to 
> Multi-Threading IMO. The interface itself suggests that this class is 
> stateful and you have to call methods in a certain order and at the same time you 
> need to make sure that it is not published for read access before training is 
> done. I think it would be wise to pass in all needed objects as constructor 
> arguments and make the references final so it can be shared across threads 
> and add an interface that represents the trained model computed offline? In 
> this case it doesn't really matter but in the future it might make sense. We 
> can also skip the model interface entirely and remove the training method 
> until we have some impls that really need to be trained.
> I missed that at that point but I think for 6.0 (?) it would be wise to 
> rearrange the API to address that properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7531) Config API is merging certain key names together

2015-05-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-7531:


Assignee: Noble Paul

> Config API is merging certain key names together
> 
>
> Key: SOLR-7531
> URL: https://issues.apache.org/jira/browse/SOLR-7531
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: Trunk, 5.2
>
>
> Starting from a new Solr 5.0 install
> {code}
> ./bin/solr start -e schemaless
> curl 'http://localhost:8983/solr/gettingstarted/config' > config.json
> {code}
> Open config.json and note that there is a key called "autoCommmitMaxDocs" 
> under the updateHandler section.
> {code}
> curl 'http://localhost:8983/solr/gettingstarted/config' -H 
> 'Content-type:application/json' -d '{"set-property" : 
> {"updateHandler.autoCommit.maxDocs" : 5000}}'
> curl 'http://localhost:8983/solr/gettingstarted/config' > config.json
> {code}
> Open config.json and note that the value of both updateHandler > autoCommit > 
> maxDocs and updateHandler > autoCommitMaxDocs is now set to 5000



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4802 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4802/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([3195001EA64ADD1B:8814D6C18AA0D991]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:794)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:14&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:787)
... 40 more




Build Log:
[...truncated 10133 lines...]
   [junit4] Suite: org.apache.so

[jira] [Resolved] (LUCENE-6472) Add min and max document options to global ordinal join

2015-05-12 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen resolved LUCENE-6472.
---
Resolution: Pending Closed

> Add min and max document options to global ordinal join
> ---
>
> Key: LUCENE-6472
> URL: https://issues.apache.org/jira/browse/LUCENE-6472
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Fix For: 5.2
>
> Attachments: LUCENE-6472.patch
>
>
> This feature allows matching only "to" documents that have between min and 
> max matching "from" documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6472) Add min and max document options to global ordinal join

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540121#comment-14540121
 ] 

ASF subversion and git services commented on LUCENE-6472:
-

Commit 1678991 from [~martijn.v.groningen] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1678991 ]

LUCENE-6472: Added min and max document options to global ordinal join

> Add min and max document options to global ordinal join
> ---
>
> Key: LUCENE-6472
> URL: https://issues.apache.org/jira/browse/LUCENE-6472
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Fix For: 5.2
>
> Attachments: LUCENE-6472.patch
>
>
> This feature allows matching only "to" documents that have between min and 
> max matching "from" documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6472) Add min and max document options to global ordinal join

2015-05-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540102#comment-14540102
 ] 

ASF subversion and git services commented on LUCENE-6472:
-

Commit 1678989 from [~martijn.v.groningen] in branch 'dev/trunk'
[ https://svn.apache.org/r1678989 ]

LUCENE-6472: Added min and max document options to global ordinal join

> Add min and max document options to global ordinal join
> ---
>
> Key: LUCENE-6472
> URL: https://issues.apache.org/jira/browse/LUCENE-6472
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Fix For: 5.2
>
> Attachments: LUCENE-6472.patch
>
>
> This feature allows matching only "to" documents that have between min and 
> max matching "from" documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7533) DIH doesn't return "Indexing completed" when 0 documents processed

2015-05-12 Thread Jellyfrog (JIRA)
Jellyfrog created SOLR-7533:
---

 Summary: DIH doesn't return "Indexing completed" when 0 documents 
processed
 Key: SOLR-7533
 URL: https://issues.apache.org/jira/browse/SOLR-7533
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.1, 5.0
Reporter: Jellyfrog


Normally, the status for a DIH when done will be something like:
{code}
{
    "responseHeader": {
        "status": 0,
        "QTime": 0
    },
    "initArgs": [
        "defaults",
        [
            "config",
            "data_import.xml"
        ]
    ],
    "command": "status",
    "status": "idle",
    "importResponse": "",
    "statusMessages": {
        "Total Requests made to DataSource": "1",
        "Total Rows Fetched": "480463",
        "Total Documents Skipped": "0",
        "Full Dump Started": "2015-04-21 14:16:17",
        "": "Indexing completed. Added/Updated: 480463 documents. Deleted 0 documents.",
        "Total Documents Processed": "480463",
        "Time taken": "0:12:31.863"
    }
}
{code}

But when it processes 0 rows, it's missing the "Indexing completed" part:
{code}
{
    "responseHeader": {
        "status": 0,
        "QTime": 0
    },
    "initArgs": [
        "defaults",
        [
            "config",
            "data_import.xml"
        ]
    ],
    "command": "status",
    "status": "idle",
    "importResponse": "",
    "statusMessages": {
        "Total Requests made to DataSource": "1",
        "Total Rows Fetched": "0",
        "Total Documents Processed": "0",
        "Total Documents Skipped": "0",
        "Full Dump Started": "2015-05-12 17:39:44",
        "Time taken": "0:0:2.805"
    }
}
{code}

This makes the output very inconsistent and harder to handle programmatically.
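
To illustrate, a hedged sketch of what a client currently has to do 
(hypothetical helper; the empty-string key and message text are taken from 
the status responses above):

{code}
import java.util.Map;

public final class DihStatus {
  // Hypothetical client-side check: completion can only be detected via
  // the oddly-named "" key in statusMessages, and that key is simply
  // absent when zero documents were processed.
  public static boolean importFinished(Map<String, Object> statusMessages) {
    Object msg = statusMessages.get("");
    return msg != null && msg.toString().startsWith("Indexing completed");
  }
}
{code}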



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-329) Fuzzy query scoring issues

2015-05-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540069#comment-14540069
 ] 

Adrien Grand edited comment on LUCENE-329 at 5/12/15 3:44 PM:
--

It's not correct to do {{maxTtf = Math.max(ttf, maxTtf)}} because the ttf can 
sometimes be -1, so it would rather need to be something like {{maxTtf = ttf == 
-1 || maxTtf == -1 ? -1 : Math.max(ttf, maxTtf)}}.

Also I liked it better in the previous patch how you built a new TermContext 
instance instead of modifying the current one in place. Maybe you could add it 
back?


was (Author: jpountz):
It's not correct to do {{maxTtf = Math.max(ttf, maxTtf)}} because the ttf can 
sometimes be -1, so it would rather need to be something like {{maxTtf = ttf == 
-1 ? -1 : Math.max(ttf, maxTtf)}}.

Also I liked it better in the previous patch how you built a new TermContext 
instance instead of modifying the current one in place. Maybe you could add it 
back?

> Fuzzy query scoring issues
> --
>
> Key: LUCENE-329
> URL: https://issues.apache.org/jira/browse/LUCENE-329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 1.2
> Environment: Operating System: All
> Platform: All
>Reporter: Mark Harwood
>Assignee: Mark Harwood
>Priority: Minor
> Fix For: 5.x
>
> Attachments: ASF.LICENSE.NOT.GRANTED--patch.txt, LUCENE-329.patch, 
> LUCENE-329.patch
>
>
> Queries which automatically produce multiple terms (wildcard, range, prefix, 
> fuzzy, etc.) currently suffer from two problems:
> 1) Scores for matching documents are significantly smaller than term queries 
> because of the volume of terms introduced (A match on query Foo~ is 0.1 
> whereas a match on query Foo is 1).
> 2) The rarer forms of expanded terms are favoured over those of more common 
> forms because of the IDF. When using Fuzzy queries, for example, rare 
> misspellings typically appear in results before the more common correct 
> spellings.
> I will attach a patch that corrects the issues identified above by 
> 1) Overriding Similarity.coord to counteract the downplaying of scores 
> introduced by expanding terms.
> 2) Taking the IDF factor of the most common form of expanded terms as the 
> basis of scoring all other expanded terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-329) Fuzzy query scoring issues

2015-05-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540069#comment-14540069
 ] 

Adrien Grand commented on LUCENE-329:
-

It's not correct to do {{maxTtf = Math.max(ttf, maxTtf)}} because the ttf can 
sometimes be -1, so it would rather need to be something like {{maxTtf = ttf == 
-1 ? -1 : Math.max(ttf, maxTtf)}}.

Also I liked it better in the previous patch how you built a new TermContext 
instance instead of modifying the current one in place. Maybe you could add it 
back?
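
For context, a tiny self-contained sketch of the sentinel-aware aggregation 
(illustrative only; as the edited comment above notes, the guard must also 
propagate a -1 that is already stored in maxTtf):

{code}
// Illustrative sketch: aggregate total term frequencies where -1 means
// "statistic not available". Once any term reports -1, the maximum must
// stay -1 instead of becoming a misleading partial value.
static long maxTotalTermFreq(long[] ttfs) {
  long maxTtf = 0;
  for (long ttf : ttfs) {
    maxTtf = (ttf == -1 || maxTtf == -1) ? -1 : Math.max(ttf, maxTtf);
  }
  return maxTtf;
}
{code}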

> Fuzzy query scoring issues
> --
>
> Key: LUCENE-329
> URL: https://issues.apache.org/jira/browse/LUCENE-329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 1.2
> Environment: Operating System: All
> Platform: All
>Reporter: Mark Harwood
>Assignee: Mark Harwood
>Priority: Minor
> Fix For: 5.x
>
> Attachments: ASF.LICENSE.NOT.GRANTED--patch.txt, LUCENE-329.patch, 
> LUCENE-329.patch
>
>
> Queries which automatically produce multiple terms (wildcard, range, prefix, 
> fuzzy, etc.) currently suffer from two problems:
> 1) Scores for matching documents are significantly smaller than term queries 
> because of the volume of terms introduced (A match on query Foo~ is 0.1 
> whereas a match on query Foo is 1).
> 2) The rarer forms of expanded terms are favoured over those of more common 
> forms because of the IDF. When using Fuzzy queries, for example, rare 
> misspellings typically appear in results before the more common correct 
> spellings.
> I will attach a patch that corrects the issues identified above by 
> 1) Overriding Similarity.coord to counteract the downplaying of scores 
> introduced by expanding terms.
> 2) Taking the IDF factor of the most common form of expanded terms as the 
> basis of scoring all other expanded terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6458) MultiTermQuery's FILTER rewrite method should support skipping whenever possible

2015-05-12 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6458:
-
Attachment: LUCENE-6458.patch
wikimedium.10M.nostopwords.tasks

I did some more benchmarking of the change with filters (see attached tasks 
file) and various thresholds (and a fixed seed):

{noformat}
16
Task        QPS baseline StdDev   QPS patch StdDev   Pct diff
MTQ               24.33  (7.5%)      20.67  (7.3%)   -15.1% ( -27% -   0%)
IntNRQ            20.38  (7.3%)      17.85 (11.9%)   -12.4% ( -29% -   7%)
IntNRQ_50          8.94 (10.1%)       8.67  (8.6%)    -3.0% ( -19% -  17%)
MTQ_50             9.05  (7.9%)       8.93  (5.3%)    -1.3% ( -13% -  12%)
IntNRQ_10         13.72 (12.7%)      13.60 (11.9%)    -0.9% ( -22% -  27%)
IntNRQ_1          17.53 (17.1%)      17.53 (16.3%)     0.0% ( -28% -  40%)
MTQ_10            13.70 (11.2%)      13.89  (8.7%)     1.4% ( -16% -  23%)
MTQ_1             19.11 (15.8%)      21.43 (18.0%)    12.1% ( -18% -  54%)

64
Task        QPS baseline StdDev   QPS patch StdDev   Pct diff
IntNRQ            20.53  (6.9%)      16.42  (5.3%)   -20.0% ( -30% -  -8%)
MTQ               24.31  (7.3%)      20.34  (6.4%)   -16.3% ( -27% -  -2%)
IntNRQ_50          8.87  (9.2%)       8.31  (6.5%)    -6.3% ( -20% -  10%)
IntNRQ_10         13.55 (12.7%)      12.80 (10.2%)    -5.6% ( -25% -  19%)
IntNRQ_1          17.27 (16.3%)      16.38 (13.1%)    -5.2% ( -29% -  28%)
MTQ_50             9.00  (7.6%)       9.02  (4.5%)     0.3% ( -10% -  13%)
MTQ_10            13.65 (11.1%)      14.73  (8.2%)     7.9% ( -10% -  30%)
MTQ_1             18.95 (15.1%)      25.32 (17.2%)    33.6% (   1% -  77%)

256
Task        QPS baseline StdDev   QPS patch StdDev   Pct diff
IntNRQ            20.43  (9.4%)      12.69  (1.7%)   -37.9% ( -44% - -29%)
MTQ               24.13  (9.3%)      19.32  (5.3%)   -19.9% ( -31% -  -5%)
IntNRQ_1          17.21 (19.5%)      13.90  (7.7%)   -19.2% ( -38% -   9%)
IntNRQ_10         13.49 (12.7%)      10.95  (5.7%)   -18.8% ( -33% -   0%)
IntNRQ_50          8.85 (10.5%)       7.40  (3.8%)   -16.4% ( -27% -  -2%)
MTQ_50             8.94  (8.3%)       8.82  (4.4%)    -1.3% ( -12% -  12%)
MTQ_10            13.53 (12.6%)      14.64  (5.9%)     8.2% (  -9% -  30%)
MTQ_1             18.88 (15.6%)      26.52 (14.2%)    40.5% (   9% -  83%)

1024
Task        QPS baseline StdDev   QPS patch StdDev   Pct diff
IntNRQ            20.40  (7.7%)       6.54  (1.5%)   -67.9% ( -71% - -63%)
IntNRQ_1          17.57 (17.2%)       8.27  (2.9%)   -52.9% ( -62% - -39%)
IntNRQ_10         13.66 (13.0%)       6.72  (2.4%)   -50.8% ( -58% - -40%)
IntNRQ_50          8.96 (10.4%)       5.01  (1.5%)   -44.1% ( -50% - -35%)
MTQ               24.41  (8.2%)      18.07  (4.4%)   -26.0% ( -35% - -14%)
MTQ_50             9.05  (8.1%)       8.65  (3.5%)    -4.5% ( -14% -   7%)
MTQ_10            13.60 (11.5%)      14.41  (3.9%)     6.0% (  -8% -  24%)
MTQ_1             19.11 (15.6%)      27.32 (12.9%)    43.0% (  12% -  84%)
{noformat}

Rewriting to a BooleanQuery never helps when there is no filter, but something 
the benchmark doesn't capture is that BooleanQuery at least does not allocate 
O(maxDoc) memory, which can matter for large datasets.

When there are filters it's more complicated: it depends on the density of the 
filter, on the number of terms, and apparently also on how the frequencies of 
the different terms compare (this is my current theory for why WildcardQuery 
performs better than NRQ).

Net/net I think this validates that 64 would be a good threshold to rewrite, 
with minimal slowdown when filters are dense and interesting speedups when 
filters are sparse?
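
Schematically, the decision such a threshold would drive looks like the 
following (a sketch only, with a hypothetical helper for the bit-set path; 
this is not the actual MultiTermQuery rewrite code):

{code}
// Sketch: below the threshold, rewrite the expanded terms to a
// BooleanQuery so the resulting scorer supports skipping; above it, fall
// back to a bit-set based approach that allocates O(maxDoc) memory.
Query rewrite(List<Term> expandedTerms) {
  if (expandedTerms.size() <= 64) {
    BooleanQuery bq = new BooleanQuery();
    for (Term t : expandedTerms) {
      bq.add(new TermQuery(t), BooleanClause.Occur.SHOULD);
    }
    return bq;
  }
  return buildBitSetQuery(expandedTerms); // hypothetical helper
}
{code}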

> MultiTermQuery's FILTER rewrite method should support skipping whenever 
> possible
> 
>
> Key: LUCENE-6458
> URL: https://issues.apache.org/jira/browse/LUCENE-6458
> Project: Lucene - Core
>  Issue Type: Improvement
>

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2249 - Failure!

2015-05-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2249/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (43 > 40) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (43 > 40) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([5FBF41E36556D271:D7EB7E39CBAABF89]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Created] (SOLR-7532) Nuke all traces of commitIntervalLowerBound from configs

2015-05-12 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-7532:
---

 Summary: Nuke all traces of commitIntervalLowerBound from configs
 Key: SOLR-7532
 URL: https://issues.apache.org/jira/browse/SOLR-7532
 Project: Solr
  Issue Type: Task
Reporter: Shalin Shekhar Mangar
Priority: Trivial
 Fix For: Trunk, 5.2


I noticed this first via the Config API, which shows a config element called 
"commitIntervalLowerBound" under the updateHandler section. A quick search 
shows that this property is not used anywhere. In fact, some of the old 
solrconfig.xml files used by tests and the extraction contrib had the 
following to say about this property:

{code}

{code}

This is clearly not used so let's remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7531) Config API is merging certain key names together

2015-05-12 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-7531:
---

 Summary: Config API is merging certain key names together
 Key: SOLR-7531
 URL: https://issues.apache.org/jira/browse/SOLR-7531
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1, 5.0
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2


Starting from a new Solr 5.0 install

{code}
./bin/solr start -e schemaless
curl 'http://localhost:8983/solr/gettingstarted/config' > config.json
{code}

Open config.json and note that there is a key called "autoCommmitMaxDocs" under 
the updateHandler section.

{code}
curl 'http://localhost:8983/solr/gettingstarted/config' -H 
'Content-type:application/json' -d '{"set-property" : 
{"updateHandler.autoCommit.maxDocs" : 5000}}'
curl 'http://localhost:8983/solr/gettingstarted/config' > config.json
{code}

Open config.json and note that the value of both updateHandler > autoCommit > 
maxDocs and updateHandler > autoCommitMaxDocs is now set to 5000



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6472) Add min and max document options to global ordinal join

2015-05-12 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539939#comment-14539939
 ] 

Martijn van Groningen commented on LUCENE-6472:
---

makes sense, I'll update the jdocs.

> Add min and max document options to global ordinal join
> ---
>
> Key: LUCENE-6472
> URL: https://issues.apache.org/jira/browse/LUCENE-6472
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Fix For: 5.2
>
> Attachments: LUCENE-6472.patch
>
>
> This feature allows matching only "to" documents that have between min and 
> max matching "from" documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6479) Create utility to generate Classifier's confusion matrix

2015-05-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-6479:
---

 Summary: Create utility to generate Classifier's confusion matrix
 Key: LUCENE-6479
 URL: https://issues.apache.org/jira/browse/LUCENE-6479
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


In order to debug and compare accuracy of {{Classifiers}} it's often useful to 
print the related [confusion 
matrix|http://en.wikipedia.org/wiki/Confusion_matrix] so it'd be good to 
provide such a utility class/method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6045) Refactor classifier APIs to work better with multi threading

2015-05-12 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-6045.
-
Resolution: Fixed

> Refactor classifier APIs to work better with multi threading
> ---
>
> Key: LUCENE-6045
> URL: https://issues.apache.org/jira/browse/LUCENE-6045
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: Trunk
>
>
> In 
> https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
>  [~simonw] pointed out that the current Classifier API doesn't work well in 
> multi threading environments: 
> bq. The interface you defined has some problems with respect to 
> Multi-Threading IMO. The interface itself suggests that this class is 
> stateful and you have to call methods in a certain order and at the same time you 
> need to make sure that it is not published for read access before training is 
> done. I think it would be wise to pass in all needed objects as constructor 
> arguments and make the references final so it can be shared across threads 
> and add an interface that represents the trained model computed offline? In 
> this case it doesn't really matter but in the future it might make sense. We 
> can also skip the model interface entirely and remove the training method 
> until we have some impls that really need to be trained.
> I missed that at that point but I think for 6.0 (?) it would be wise to 
> rearrange the API to address that properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2015-05-12 Thread JIRA
Raúl Grande created SOLR-7530:
-

 Summary: Wrong JSON response using Terms Component with 
distrib=true
 Key: SOLR-7530
 URL: https://issues.apache.org/jira/browse/SOLR-7530
 Project: Solr
  Issue Type: Bug
  Components: Response Writers, SearchComponents - other, SolrCloud
Affects Versions: 4.9
Reporter: Raúl Grande


When using TermsComponent in SolrCloud there are differences in the JSON 
response depending on whether the distrib parameter is true or false. If 
distrib=true the JSON is not well formed (please note the [ ] marks).

JSON Response when distrib=false. Correct response:
{
    "responseHeader": {
        "status": 0,
        "QTime": 3
    },
    "terms": {
        "FileType": [
            "EMAIL", 20060,
            "PDF", 7051,
            "IMAGE", 5108,
            "OFFICE", 4912,
            "TXT", 4405,
            "OFFICE_EXCEL", 4122,
            "OFFICE_WORD", 2468
        ]
    }
}

JSON Response when distrib=true. Incorrect response:
{
    "responseHeader": {
        "status": 0,
        "QTime": 94
    },
    "terms": {
        "FileType": {
            "EMAIL": 31923,
            "PDF": 11545,
            "IMAGE": 9807,
            "OFFICE_EXCEL": 8195,
            "OFFICE": 5147,
            "OFFICE_WORD": 4820,
            "TIFF": 1156,
            "XML": 851,
            "HTML": 821,
            "RTF": 303
        }
    }
}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7529) NullPointerException in RELOAD-command in CoreAdmin

2015-05-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539828#comment-14539828
 ] 

Shawn Heisey commented on SOLR-7529:


This is a lack of input validation - there should be a null check on at least 
the "core" parameter, and the resulting error needs to explain the problem.  
The RELOAD action is not the only one that is missing validation.
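
A minimal sketch of the kind of guard meant here (hypothetical; the real 
handler's wiring is simplified):

{code}
import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.CoreAdminParams;
import org.apache.solr.request.SolrQueryRequest;

// Hypothetical validation at the top of the RELOAD handling, before the
// core name reaches the TreeSet/TreeMap lookup that currently throws an
// uninformative NullPointerException.
static String requireCoreName(SolrQueryRequest req) {
  String cname = req.getParams().get(CoreAdminParams.CORE);
  if (cname == null) {
    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        "Missing required parameter 'core' for the RELOAD action");
  }
  return cname;
}
{code}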

> NullPointerException in RELOAD-command in CoreAdmin
> ---
>
> Key: SOLR-7529
> URL: https://issues.apache.org/jira/browse/SOLR-7529
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Jellyfrog
>Priority: Minor
>
> http://solr.local:8983/solr/admin/cores?action=RELOAD
> {code}
> 
> 
> <lst name="responseHeader"><int name="status">500</int><int name="QTime">1</int></lst>
> <lst name="error"><str name="trace">java.lang.NullPointerException
>   at java.util.TreeMap.getEntry(TreeMap.java:347)
>   at java.util.TreeMap.containsKey(TreeMap.java:232)
>   at java.util.TreeSet.contains(TreeSet.java:234)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:713)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:223)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:186)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:736)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:204)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:745)
> </str><int name="code">500</int></lst></response>
> 
> {code}
> http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java?view=markup#l768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7529) NullPointerException in RELOAD-command in CoreAdmin

2015-05-12 Thread Jellyfrog (JIRA)
Jellyfrog created SOLR-7529:
---

 Summary: NullPointerException in RELOAD-command in CoreAdmin
 Key: SOLR-7529
 URL: https://issues.apache.org/jira/browse/SOLR-7529
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Jellyfrog
Priority: Minor


http://solr.local:8983/solr/admin/cores?action=RELOAD
{code}



<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">500</int><int name="QTime">1</int></lst>
<lst name="error"><str name="trace">java.lang.NullPointerException
at java.util.TreeMap.getEntry(TreeMap.java:347)
at java.util.TreeMap.containsKey(TreeMap.java:232)
at java.util.TreeSet.contains(TreeSet.java:234)
at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:713)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:223)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:186)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:736)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:204)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
500

{code}

http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java?view=markup#l768
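
The trace bottoms out in TreeMap.getEntry, which is where a TreeSet built on 
natural ordering rejects a null probe. A minimal sketch of that failure mode 
(variable names hypothetical; the request above passes no core parameter, so 
the name checked against the set of existing cores is presumably null):

{code}
import java.util.TreeSet;

public class ReloadNpeSketch {
  public static void main(String[] args) {
    // A TreeSet using natural ordering cannot be probed with null:
    // contains(null) -> TreeMap.containsKey(null) -> TreeMap.getEntry(null),
    // which throws NullPointerException -- the same frames as in the trace.
    TreeSet<String> existingCores = new TreeSet<>();
    existingCores.add("collection1");

    String requestedCore = null; // RELOAD called without a core parameter
    System.out.println(existingCores.contains(requestedCore));
  }
}
{code}

A null check on the core name before the contains() call (or a 400 response 
for the missing parameter) would presumably turn this 500 into a proper error 
message.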



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6472) Add min and max document options to global ordinal join

2015-05-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539820#comment-14539820
 ] 

Adrien Grand commented on LUCENE-6472:
--

+1 Let's just make it explicit that the min/max parameters are included?
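
For illustration, a hedged sketch of how the min/max variant might be invoked 
once this lands (signature assumed from the patch discussion and may differ 
from the final API):

{code}
import java.io.IOException;

import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;

public class MinMaxJoinSketch {
  // Keeps only "to" documents with between min and max matching "from"
  // documents; whether those bounds are inclusive is exactly the question
  // raised in the comment above.
  static Query joinWithBounds(IndexSearcher searcher,
                              MultiDocValues.OrdinalMap ordinalMap)
      throws IOException {
    return JoinUtil.createJoinQuery(
        "join_field",             // hypothetical join field on both sides
        new MatchAllDocsQuery(),  // fromQuery: which "from" docs participate
        new MatchAllDocsQuery(),  // toQuery: candidate "to" docs
        searcher,
        ScoreMode.None,
        ordinalMap,               // global ordinal map for the join field
        2, 10);                   // hypothetical min/max bounds
  }
}
{code}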

> Add min and max document options to global ordinal join
> ---
>
> Key: LUCENE-6472
> URL: https://issues.apache.org/jira/browse/LUCENE-6472
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Fix For: 5.2
>
> Attachments: LUCENE-6472.patch
>
>
> This feature allows matching only "to" documents that have between min and 
> max matching "from" documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7513) Add Equalitors to Streaming Expressions

2015-05-12 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7513:
--
Attachment: SOLR-7513.patch

Modified the Equalitor interface to more closely mirror Java 8's BiPredicate. 
I'm not using BiPredicate because this should be back-ported to 5.2 and as 
such needs to be Java 7 compatible.

Depends on SOLR-7377, SOLR-7524, and SOLR-7528.
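
As a rough sketch, a Java 7 compatible analogue of BiPredicate could look 
like this (names assumed here, not taken from the patch):

{code}
// Hypothetical sketch of an equality-only interface: unlike Comparator,
// it answers "equal or not" without imposing an ordering, which is why
// expressions over it can drop the asc/desc part.
public interface Equalitor<T> {
  boolean test(T left, T right);
}
{code}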

> Add Equalitors to Streaming Expressions
> ---
>
> Key: SOLR-7513
> URL: https://issues.apache.org/jira/browse/SOLR-7513
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-7513.patch, SOLR-7513.patch
>
>
> Right now all streams use the Comparator interface to compare tuples. 
> The Comparator interface will tell you if tupleA is before, after, or equal 
> to tupleB. This is great for most streams as they use this logic when 
> combining multiple streams together. However, some streams only care about 
> the equality of two tuples and the less/greater than logic is unnecessary.
> This depends on SOLR-7377.
> This patch is to introduce a new interface into streaming expressions called 
> Equalitor which will return if two tuples are equal. The benefit here 
> is that the expressions for streams using Equalitor instead of Comparator can 
> omit the ordering part.
> {code}
> unique(somestream, over="fieldA asc, fieldB desc")
> {code}
> can become
> {code}
> unique(somestream, over="fieldA,fieldB")
> {code}
> The added benefit is that this will set us up with simpler expressions for 
> joins (hash, merge, inner, outer, etc...) as those only care about equality.
> By adding this as an interface we make no assumptions about what it means to 
> be equal, just that some implementation needs to exist adhering to the 
> Equalitor interface which will determine if two tuples are logically 
> equal. 
> We do define at least one concrete class which checks for equality but that 
> does not preclude others from adding additional concrete classes with their 
> own logic in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr

2015-05-12 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539721#comment-14539721
 ] 

Ishan Chattopadhyaya commented on SOLR-7274:


I've used stuff from SOLR-7275, so it should be the same format as Anshum 
mentions here:
https://issues.apache.org/jira/browse/SOLR-7275?focusedCommentId=14497128&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14497128

Something like:
{noformat}
{"authorization":
  {"class":"solr.SimpleSolrAuthorizationPlugin",
  "deny":["user1","user2"]
  },
  "authentication":
  {"class":"org.apache.solr.security.KerberosPlugin",
   "conf1": "val1", ...
  }
}
{noformat}

The kerberos plugin (SOLR-7468) doesn't require any config other than "class" 
at the moment. All other config parameters are host specific and are picked 
up from system properties.

> Pluggable authentication module in Solr
> ---
>
> Key: SOLR-7274
> URL: https://issues.apache.org/jira/browse/SOLR-7274
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
> Attachments: SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
> SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch
>
>
> It would be good to have Solr support different authentication protocols.
> To begin with, it'd be good to have support for kerberos and basic auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7528) Simplify Interfaces used in Streaming Expressions

2015-05-12 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7528:
--
Attachment: SOLR-7528.patch

> Simplify Interfaces used in Streaming Expressions
> -
>
> Key: SOLR-7528
> URL: https://issues.apache.org/jira/browse/SOLR-7528
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk, 5.2
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7528.patch
>
>
> FieldComparator and StreamComparator have been collapsed into a single class 
> StreamComparator. There was no need for a separate abstract class.
> Added null checks in StreamComparator. For now, if both values are null they 
> will evaluate as equal. We can add a later enhancement under a new ticket to 
> make that configurable.
> Interfaces ExpressibleStream and ExpressibleComparator have been collapsed 
> into interface Expressible. They defined the same interface and there's no 
> reason to have separate interfaces for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7528) Simplify Interfaces used in Streaming Expressions

2015-05-12 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-7528:
-

 Summary: Simplify Interfaces used in Streaming Expressions
 Key: SOLR-7528
 URL: https://issues.apache.org/jira/browse/SOLR-7528
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: Trunk, 5.2
Reporter: Dennis Gove
Priority: Minor
 Fix For: Trunk, 5.2


FieldComparator and StreamComparator have been collapsed into a single class 
StreamComparator. There was no need for a separate abstract class.

Added null checks in StreamComparator. For now, if both values are null they 
will evaluate as equal. We can add a later enhancement under a new ticket to 
make that configurable.

Interfaces ExpressibleStream and ExpressibleComparator have been collapsed into 
interface Expressible. They defined the same interface and there's no reason to 
have separate interfaces for them.
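
A hedged sketch of the null handling described above (only the both-null case 
is specified here; the single-null ordering shown is an assumption, not taken 
from the patch):

{code}
// Hypothetical null-safe comparison inside StreamComparator: two nulls
// evaluate as equal; a lone null is ordered first in this sketch.
static <T extends Comparable<T>> int compareValues(T left, T right) {
  if (left == null && right == null) {
    return 0;   // both null => equal, per the description above
  }
  if (left == null) {
    return -1;  // assumption: null sorts before non-null
  }
  if (right == null) {
    return 1;
  }
  return left.compareTo(right);
}
{code}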



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6803) Pivot Performance

2015-05-12 Thread Neil Ireson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539672#comment-14539672
 ] 

Neil Ireson commented on SOLR-6803:
---

I also made the naive change of removing the offending line from the code, by 
replacing

{code}
DocSet subset = getSubset(docs, sfield, fieldValue);
{code}
with
{code}
DocSet subset = null;
if ( subField != null || ((isShard || 0 < pivotCount) && ! statsFields.isEmpty()) ) {
  subset = getSubset(docs, sfield, fieldValue);
}
{code}
Just to show that in this case the pivot still provides the best results.

| Values (#) |  Combined (ms) | Facet (ms) | Pivot (ms) |
| 100|   202 |   133 |67 |
| 1000   |   215 |   183 |73 |
| 1  |   255 |   392 |   145 |
| 10 |   464 |  1301 |   395 |
| 50 |  1307 |  4458 |  1179 |
| 100|  2471 |  7783 |  2148 |

Note that with this change the code passed all the compile tests, so it's 
still not clear to me why getSubset has to be called every time. 


> Pivot Performance
> -
>
> Key: SOLR-6803
> URL: https://issues.apache.org/jira/browse/SOLR-6803
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Neil Ireson
>Priority: Minor
> Attachments: PivotPerformanceTest.java
>
>
> I found that my pivot search for terms per day was taking an age so I knocked 
> up a quick test, using a collection of 1 million documents with a different 
> number of random terms and times, to compare different ways of getting the 
> counts.
> 1) Combined = combining the term and time in a single field.
> 2) Facet = for each term set the query to the term and then get the time 
> facet 
> 3) Pivot = use the term/time pivot facet.
> The following two tables present the results for version 4.9.1 vs 4.10.1, as 
> an average of five runs.
> 4.9.1 (Processing time in ms)
> |Values (#)   |  Combined (ms)| Facet (ms)| Pivot (ms)|
> |100   |22|21|52|
> |1000  |   178|57|   115|
> |1 |  1363|   211|   310|
> |10|  2592|  1009|   978|
> |50|  3125|  3753|  2476|
> |100   |  3957|  6789|  3725|
> 4.10.1 (Processing time in ms)
> |Values (#)   |  Combined (ms)| Facet (ms)| Pivot (ms)|
> |100   |21|21|75|
> |1000  |   188|60|   265|
> |1 |  1438|   215|  1826|
> |10|  2768|  1073| 16594|
> |50|  3266|  3686| 99682|
> |100   |  4080|  6777|208873|
> The results show that, as the number of pivot values increases (i.e. number 
> of terms * number of times), pivot performance in 4.10.1 gets progressively 
> worse.
> I tried to look at the code but there were a lot of changes in pivoting 
> between 4.9 and 4.10, and so it is not clear to me what has caused the 
> performance issues. However, the results seem to indicate that if the pivot 
> were simply a combined facet search, it could potentially produce better and 
> more robust performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7527) Need count of facet.pivot on distinct combination

2015-05-12 Thread Nagabhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nagabhushan updated SOLR-7527:
--
Issue Type: Wish  (was: Task)

> Need count of facet.pivot on distinct combination
> -
>
> Key: SOLR-7527
> URL: https://issues.apache.org/jira/browse/SOLR-7527
> Project: Solr
>  Issue Type: Wish
>Reporter: Nagabhushan
>
> Hi, I need to get an action-wise count in a campaign, using 
> facet.pivot=campaignId,action to get it.
> Ex : campaignId,id,action
>1,1,a
>1,1,a
>1,2,a
>1,2,b
> When I do facet.pivot I get {a:3,b:1}; the facet counts duplicate rows.
> I need the count distinct by the combination of campaignId,id,action, which 
> is {a:2,b:1}
> Thanks,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7527) Need count of facet.pivot on distinct combination

2015-05-12 Thread Nagabhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nagabhushan updated SOLR-7527:
--
Issue Type: Improvement  (was: Wish)

> Need count of facet.pivot on distinct combination
> -
>
> Key: SOLR-7527
> URL: https://issues.apache.org/jira/browse/SOLR-7527
> Project: Solr
>  Issue Type: Improvement
>Reporter: Nagabhushan
>
> Hi, I need to get an action-wise count in a campaign, using 
> facet.pivot=campaignId,action to get it.
> Ex : campaignId,id,action
>1,1,a
>1,1,a
>1,2,a
>1,2,b
> When I do facet.pivot I get {a:3,b:1}; the facet counts duplicate rows.
> I need the count distinct by the combination of campaignId,id,action, which 
> is {a:2,b:1}
> Thanks,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7527) Need count of facet.pivot on distinct combination

2015-05-12 Thread Nagabhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nagabhushan updated SOLR-7527:
--
Priority: Major  (was: Trivial)

> Need count of facet.pivot on distinct combination
> -
>
> Key: SOLR-7527
> URL: https://issues.apache.org/jira/browse/SOLR-7527
> Project: Solr
>  Issue Type: Task
>Reporter: Nagabhushan
>
> Hi, I need to get an action-wise count in a campaign, using 
> facet.pivot=campaignId,action to get it.
> Ex : campaignId,id,action
>1,1,a
>1,1,a
>1,2,a
>1,2,b
> When I do facet.pivot I get {a:3,b:1}; the facet counts duplicate rows.
> I need the count distinct by the combination of campaignId,id,action, which 
> is {a:2,b:1}
> Thanks,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7527) Need count of facet.pivot on distinct combination

2015-05-12 Thread Nagabhushan (JIRA)
Nagabhushan created SOLR-7527:
-

 Summary: Need count of facet.pivot on distinct combination
 Key: SOLR-7527
 URL: https://issues.apache.org/jira/browse/SOLR-7527
 Project: Solr
  Issue Type: Task
Reporter: Nagabhushan
Priority: Trivial


Hi, I need to get an action-wise count in a campaign, using 
facet.pivot=campaignId,action to get it.

Ex : campaignId,id,action
   1,1,a
   1,1,a
   1,2,a
   1,2,b

When I do facet.pivot I get {a:3,b:1}; the facet counts duplicate rows.

I need the count distinct by the combination of campaignId,id,action, which is {a:2,b:1}

Thanks,




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-329) Fuzzy query scoring issues

2015-05-12 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-329:

Attachment: LUCENE-329.patch

Switched to the TermContext.accumulateStatistics() method Adrien suggested for 
tweaking stats.
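
For reference, a hedged sketch of how TermContext.accumulateStatistics() can 
be used to level an expanded term's statistics up to the most common form 
(helper name and mechanics are illustrative, not lifted from the patch):

{code}
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermContext;

public class FlattenedStatsSketch {
  // Hypothetical helper: tops up an expanded term's collected statistics
  // so every expansion scores with the document frequency of the most
  // common form, preventing IDF from favouring rare misspellings.
  static TermContext withCommonStats(IndexReader reader, Term expandedTerm,
                                     int maxDocFreq, long maxTotalTermFreq)
      throws IOException {
    TermContext ctx = TermContext.build(reader.getContext(), expandedTerm);
    // accumulateStatistics() adds to the stats gathered by build().
    ctx.accumulateStatistics(maxDocFreq - ctx.docFreq(),
                             maxTotalTermFreq - ctx.totalTermFreq());
    return ctx;
  }
}
{code}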

> Fuzzy query scoring issues
> --
>
> Key: LUCENE-329
> URL: https://issues.apache.org/jira/browse/LUCENE-329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 1.2
> Environment: Operating System: All
> Platform: All
>Reporter: Mark Harwood
>Assignee: Mark Harwood
>Priority: Minor
> Fix For: 5.x
>
> Attachments: ASF.LICENSE.NOT.GRANTED--patch.txt, LUCENE-329.patch, 
> LUCENE-329.patch
>
>
> Queries which automatically produce multiple terms (wildcard, range, prefix, 
> fuzzy etc.) currently suffer from two problems:
> 1) Scores for matching documents are significantly smaller than term queries 
> because of the volume of terms introduced (a match on query Foo~ is 0.1 
> whereas a match on query Foo is 1).
> 2) The rarer forms of expanded terms are favoured over those of more common 
> forms because of the IDF. When using Fuzzy queries for example, rare 
> misspellings typically appear in results before the more common correct 
> spellings.
> I will attach a patch that corrects the issues identified above by 
> 1) Overriding Similarity.coord to counteract the downplaying of scores 
> introduced by expanding terms.
> 2) Taking the IDF factor of the most common form of expanded terms as the 
> basis of scoring all other expanded terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7526) When replacing index contents and reloading the core, index is deleted

2015-05-12 Thread Jellyfrog (JIRA)
Jellyfrog created SOLR-7526:
---

 Summary: When replacing index contents and reloading the core, 
index is deleted
 Key: SOLR-7526
 URL: https://issues.apache.org/jira/browse/SOLR-7526
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
 Environment: Ubuntu 14.04
Reporter: Jellyfrog


I want to replace the index with another index, so I replace the files and do a 
RELOAD, and the index disappears.

Steps to reproduce:
1. Create an empty core (no index/tlog)
2. Add it to Solr and let it autocreate the index-folders
{code}
$ find data/
data/
data/index
data/index/write.lock
data/index/segments_1
data/tlog
{code} 
3. Replace the index with another
{code}
$ find data/
data
data/index
data/index/write.lock
data/index/_tt_Lucene50_0.doc
data/index/segments_1
data/index/_tw.fdx
data/index/_ra.fdt
data/index/_tw.fnm
data/index/_ra.fdx
data/index/_tw.nvd
data/index/_ra.fnm
data/index/_tw.nvm
data/index/_ra.nvd
data/index/_tw.si
data/index/_ra.nvm
data/index/_tw.tvd
data/index/_ra.si
data/index/_tw.tvx
data/index/_ra.tvd
data/index/_tw_Lucene50_0.doc
data/index/_ra.tvx
data/index/_tt_Lucene50_0.tip
data/index/_ra_Lucene50_0.doc
data/index/_tt_Lucene50_0.tim
data/index/_ra_Lucene50_0.pos
data/index/_tu_Lucene50_0.tim
data/index/_ra_Lucene50_0.tim
data/index/_tu_Lucene50_0.pos
data/index/_ra_Lucene50_0.tip
data/index/_tw_Lucene50_0.pos
data/index/_tk.fdt
data/index/_tw_Lucene50_0.tim
data/index/_tk.fdx
data/index/_tw_Lucene50_0.tip
data/index/_tk.fnm
data/index/_tx.fdt
data/index/_tk.nvd
data/index/_tx.fdx
data/index/_tk.nvm
data/index/_tx.fnm
data/index/_tk.si
data/index/_tx.nvd
data/index/_tk.tvd
data/index/_tx.nvm
data/index/_tk.tvx
data/index/_tu_Lucene50_0.tip
data/index/_tk_Lucene50_0.doc
data/index/_tv.fdt
data/index/_tk_Lucene50_0.pos
data/index/_tv.fdx
data/index/_tk_Lucene50_0.tim
data/index/_tv.fnm
data/index/_tk_Lucene50_0.tip
data/index/_tx.si
data/index/_tr.fdt
data/index/_tx.tvd
data/index/_tr.fdx
data/index/_tx.tvx
data/index/_tr.fnm
data/index/_tx_Lucene50_0.doc
data/index/_tr.nvd
data/index/_tx_Lucene50_0.pos
data/index/_tr.nvm
data/index/_tx_Lucene50_0.tim
data/index/_tr.si
data/index/_tx_Lucene50_0.tip
data/index/_tr.tvd
data/index/_ty.fdt
data/index/_tr.tvx
data/index/_tv.nvd
data/index/_tr_Lucene50_0.doc
data/index/_tv.nvm
data/index/_tr_Lucene50_0.pos
data/index/_tv.si
data/index/_tr_Lucene50_0.tim
data/index/_tv.tvd
data/index/_tr_Lucene50_0.tip
data/index/_ty.fdx
data/index/_ts.fdt
data/index/_ty.fnm
data/index/_ts.fdx
data/index/_ty.nvd
data/index/_ts.fnm
data/index/_ty.nvm
data/index/_ts.nvd
data/index/_ty.si
data/index/_ts.nvm
data/index/_ty.tvd
data/index/_ts.si
data/index/_ty.tvx
data/index/_ts.tvd
data/index/_ty_Lucene50_0.doc
data/index/_ts.tvx
data/index/_tv.tvx
data/index/_ts_Lucene50_0.doc
data/index/_tv_Lucene50_0.doc
data/index/_ts_Lucene50_0.pos
data/index/_tv_Lucene50_0.pos
data/index/_ts_Lucene50_0.tim
data/index/_tv_Lucene50_0.tim
data/index/_ts_Lucene50_0.tip
data/index/_ty_Lucene50_0.pos
data/index/_tt.fdt
data/index/_ty_Lucene50_0.tim
data/index/_tt.fdx
data/index/_ty_Lucene50_0.tip
data/index/_tt.fnm
data/index/_tz.fdt
data/index/_tt.nvd
data/index/_tz.fdx
data/index/_tt.nvm
data/index/_tz.fnm
data/index/_tt.si
data/index/_tz.nvd
data/index/_tt.tvd
data/index/_tz.nvm
data/index/_tt.tvx
data/index/_tv_Lucene50_0.tip
data/index/_tw.fdt
data/index/_tt_Lucene50_0.pos
data/index/_tu.fdt
data/index/_tz.si
data/index/_tu.fdx
data/index/_tz.tvd
data/index/_tu.fnm
data/index/_tz.tvx
data/index/_tu.nvd
data/index/_tz_Lucene50_0.doc
data/index/_tu.nvm
data/index/_tz_Lucene50_0.pos
data/index/_tu.si
data/index/_tz_Lucene50_0.tim
data/index/_tu.tvd
data/index/_tz_Lucene50_0.tip
data/index/_tu.tvx
data/index/_tu_Lucene50_0.doc
data/index/_u0.fdt
data/index/_u0.fdx
data/index/_u0.fnm
data/index/_u0.nvd
data/index/_u0.nvm
data/index/_u0.si
data/index/_u0.tvd
data/index/_u0.tvx
data/index/_u0_Lucene50_0.doc
data/index/_u0_Lucene50_0.pos
data/index/_u0_Lucene50_0.tim
data/index/_u0_Lucene50_0.tip
data/index/_u1.fdt
data/index/_u1.fdx
data/index/_u1.fnm
data/index/_u1.nvd
data/index/_u1.nvm
data/index/_u1.si
data/index/_u1.tvd
data/index/_u1.tvx
data/index/_u1_Lucene50_0.doc
data/index/_u1_Lucene50_0.pos
data/index/_u1_Lucene50_0.tim
data/index/_u1_Lucene50_0.tip
data/index/_u2.fdt
data/index/_u2.fdx
data/index/_u2.fnm
data/index/_u2.nvd
data/index/_u2.nvm
data/index/_u2.si
data/index/_u2.tvd
data/index/_u2.tvx
data/index/_u2_Lucene50_0.doc
data/index/_u2_Lucene50_0.pos
data/index/_u2_Lucene50_0.tim
data/index/_u2_Lucene50_0.tip
data/index/_u3.fdt
data/index/_u3.fdx
data/index/_u3.fnm
data/index/_u3.nvd
data/index/_u3.nvm
data/index/_u3.si
data/index/_u3.tvd
data/index/_u3.tvx
data/index/_u3_Lucene50_0.doc
data/index/_u3_Lucene50_0.pos
data/index/_u3_Lucene50_0.tim
data/index/_u3_Lucene50_0.tip
data/index/_u4.fdt
data/index/_u4.fdx
data/index/_u4.fnm
data/index/_u4.nvd
data/

[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-05-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539533#comment-14539533
 ] 

Karl Wright commented on LUCENE-6450:
-

Thinking about this further, I think that the simplicity of the GeoPointField 
solution is based on two things:
(1) A good binary packing of a geohash value of a fixed depth
(2) The ability to reconstruct the lat/lon from the geohash value directly

For a Geo3d equivalent, nobody to date has come up with a decent (x,y,z) 
geohash with the right locality properties.  I'm not certain how important 
those locality properties actually *are* for GeoPointField, though.  If they 
are important, then effectively we'd need to pack the *same* lat/lon based 
geohash value that GeoPointField uses, but have a lightning fast way of 
converting it to (x,y,z).  A lookup table would suffice but would be huge at 
9mm resolution. :-)  Adjusting the resolution would be a potential solution.  
If locality is *not* needed, then really a "geohash" would be any workable 
fixed-resolution binary encoding of the (x,y,z) values of any given lat/lon.
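
For concreteness, a minimal sketch of the lat/lon Morton packing the issue 
description refers to (quantization scheme assumed; the patch may differ). 
Quantizing each coordinate into 2^32 buckets gives sub-centimetre steps, 
consistent with the ~9mm figure above:

{code}
public final class MortonPackSketch {
  // Interleave the low 32 bits of x and y into one long so that nearby
  // points share long bit prefixes (the locality property discussed above).
  static long interleave(long x, long y) {
    long result = 0L;
    for (int i = 0; i < 32; i++) {
      result |= ((x >>> i) & 1L) << (2 * i);
      result |= ((y >>> i) & 1L) << (2 * i + 1);
    }
    return result;
  }

  // Hypothetical encoding: map lat/lon onto unsigned 32-bit buckets and
  // interleave them into a single long term.
  static long encode(double lat, double lon) {
    long latBits = (long) ((lat + 90.0) / 180.0 * 0xFFFFFFFFL);
    long lonBits = (long) ((lon + 180.0) / 360.0 * 0xFFFFFFFFL);
    return interleave(latBits, lonBits);
  }
}
{code}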

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch, LUCENE-6450.patch, LUCENE-6450.patch, LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries use simple multi-phase 
> filtering that starts by leveraging NumericRangeQuery to reduce candidate 
> terms, deferring the more expensive mathematics to the smaller candidate 
> sets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr

2015-05-12 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539509#comment-14539509
 ] 

Don Bosco Durai commented on SOLR-7274:
---

[~ichattopadhyaya], thanks for uploading the updated patch. Can you give a 
sample JSON for configuring the kerberos plugin?

Thanks

> Pluggable authentication module in Solr
> ---
>
> Key: SOLR-7274
> URL: https://issues.apache.org/jira/browse/SOLR-7274
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
> Attachments: SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
> SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch
>
>
> It would be good to have Solr support different authentication protocols.
> To begin with, it'd be good to have support for kerberos and basic auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7275) Pluggable authorization module in Solr

2015-05-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7275:
---
Attachment: SOLR-7275.patch

Thanks for looking at this, Noble. I've fixed a few things in this patch. 
Functionally, this patch also extracts the collection list in the case of a 
{{/select}} request that looks like:

/solr/collectionname/select?q=foo:bar&collection=anothercollection,yetanothercollection
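
Roughly, the extraction takes the union of the collection named in the path 
and any named in the {{collection}} parameter; a hedged sketch (helper and 
names hypothetical, not from the patch):

{code}
import java.util.LinkedHashSet;
import java.util.Set;

public class CollectionExtractionSketch {
  // Hypothetical helper: for /solr/collectionname/select?...&collection=a,b
  // the request context should carry [collectionname, a, b].
  static Set<String> collectionsAccessed(String path, String collectionParam) {
    Set<String> collections = new LinkedHashSet<>();
    String[] segments = path.split("/"); // ["", "solr", "collectionname", "select"]
    if (segments.length > 2) {
      collections.add(segments[2]);
    }
    if (collectionParam != null) {
      for (String name : collectionParam.split(",")) {
        collections.add(name.trim());
      }
    }
    return collections;
  }
}
{code}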


> Pluggable authorization module in Solr
> --
>
> Key: SOLR-7275
> URL: https://issues.apache.org/jira/browse/SOLR-7275
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
> SOLR-7275.patch
>
>
> Solr needs an interface that makes it easy for different authorization 
> systems to be plugged into it. Here's what I plan on doing:
> Define an interface {{SolrAuthorizationPlugin}} with one single method 
> {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
> return an {{SolrAuthorizationResponse}} object. The object as of now would 
> only contain a single boolean value but in the future could contain more 
> information e.g. ACL for document filtering etc.
> The reason why we need a context object is so that the plugin doesn't need to 
> understand Solr's capabilities e.g. how to extract the name of the collection 
> or other information from the incoming request as there are multiple ways to 
> specify the target collection for a request. Similarly request type can be 
> specified by {{qt}} or {{/handler_name}}.
> Flow:
> Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
> {code}
> public interface SolrAuthorizationPlugin {
>   public SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
> }
> {code}
> {code}
> public  class SolrRequestContext {
>   UserInfo; // Will contain user context from the authentication layer.
>   HTTPRequest request;
>   Enum OperationType; // Correlated with user roles.
>   String[] CollectionsAccessed;
>   String[] FieldsAccessed;
>   String Resource;
> }
> {code}
> {code}
> public class SolrAuthorizationResponse {
>   boolean authorized;
>   public boolean isAuthorized();
> }
> {code}
> User Roles: 
> * Admin
> * Collection Level:
>   * Query
>   * Update
>   * Admin
> Using this framework, an implementation could be written for specific 
> security systems e.g. Apache Ranger or Sentry. It would keep all the security 
> system specific code out of Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org