[jira] [Created] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-16 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8429:


 Summary: add a flag blockUnauthenticated to BasicAutPlugin
 Key: SOLR-8429
 URL: https://issues.apache.org/jira/browse/SOLR-8429
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul


If authentication is set up with BasicAuthPlugin, it lets all requests through 
when no credentials are passed. This was done to minimize the impact for users 
who only wish to protect a couple of endpoints (say, collection admin and core 
admin).

We can add a flag to {{BasicAuthPlugin}} that allows only authenticated 
requests through.
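For illustration, a hypothetical security.json using the proposed flag might 
look like the sketch below. The flag name and placement follow this issue's 
proposal and are not an existing option; the credentials value is a 
placeholder, not a real hash.

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnauthenticated": true,
    "credentials": {
      "solr": "<hashed password placeholder>"
    }
  }
}
```

With such a flag set to true, a request carrying no credentials would get a 401 
instead of passing through unauthenticated.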



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060771#comment-15060771
 ] 

Paul Elschot commented on LUCENE-6922:
--

While testing this I noticed that on the trunk branch there are 37 .java files 
that are executable:
{code}find . -name '*.java' -executable | wc{code}
That is no problem here, but it is unusual.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060811#comment-15060811
 ] 

Terry Smith commented on LUCENE-6922:
-

Does LUCENE-6933 affect this ticket?

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060838#comment-15060838
 ] 

Dawid Weiss commented on LUCENE-6922:
-

Absolutely! There is actually no promise that we will switch to git... it 
requires folks to agree to switch over. Paul's script is definitely a good 
interim solution until anything else emerges (infra fixes the problem with 
sync, we figure out what to do, etc.).

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Updated] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8429:
-
Description: 
If authentication is set up with BasicAuthPlugin, it lets all requests through 
when no credentials are passed. This was done to have minimal impact for users 
who only wish to protect a couple of endpoints (say, collection admin and core 
admin).

We can add a flag to {{BasicAuthPlugin}} that allows only authenticated 
requests through.

  was:
If authentication is setup with BasicAuthPlugin, it let's all requests go 
through if no credentials are passed. This was done to enable people to have 
minimal impact for users who only wishes to protect a couple of end points (say 
, collection admin and core admin)

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go in 


> add a flag blockUnauthenticated to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests 
> through when no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a couple of endpoints (say, collection 
> admin and core admin).
> We can add a flag to {{BasicAuthPlugin}} that allows only authenticated 
> requests through.






[jira] [Updated] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8429:
-
Description: 
If authentication is set up with BasicAuthPlugin, it lets all requests through 
when no credentials are passed. This was done to have minimal impact for users 
who only wish to protect a few endpoints (say, collection admin and core admin 
only).

We can add a flag to {{BasicAuthPlugin}} that allows only authenticated 
requests through.

  was:
If authentication is setup with BasicAuthPlugin, it let's all requests go 
through if no credentials are passed. This was done to have minimal impact for 
users who only wishes to protect a couple of end points (say , collection admin 
and core admin)

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go in 


> add a flag blockUnauthenticated to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests 
> through when no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin and 
> core admin only).
> We can add a flag to {{BasicAuthPlugin}} that allows only authenticated 
> requests through.






[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060801#comment-15060801
 ] 

Paul Elschot commented on LUCENE-6922:
--

After the git-svn mirror is turned off, this could be used by a group of people 
to push the resulting commits to public repos. Fetching these commits 
automatically at the start of the script would then allow this earlier work to 
be reused, but such automatic fetching and pushing still needs to be added.

In case the resulting branches are not the same, such fetching and pushing 
would fail, which is actually a nice check that everyone generated the same 
commits.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Description: 
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy, which has built-in support 
for callbacks to the shards. The IterativeMergeStrategy is designed to support 
the execution of parallel iterative algorithms, such as gradient descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method, which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.

Below is an example of a class that extends IterativeMergeStrategy. In this 
example it collects the *count* from the shards and then calls back to the 
shards, executing the *!count* AnalyticsQuery and sending it the merged count 
from all the shards. 

{code}
class TestIterative extends IterativeMergeStrategy {

  public void process(ResponseBuilder rb, ShardRequest sreq) throws Exception {
    int count = 0;
    for (ShardResponse shardResponse : sreq.responses) {
      NamedList response = shardResponse.getSolrResponse().getResponse();
      NamedList analytics = (NamedList) response.get("analytics");
      Integer c = (Integer) analytics.get("mycount");
      count += c.intValue();
    }

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("distrib", "false");
    params.add("fq", "{!count base=" + count + "}");
    params.add("q", "*:*");

    /*
     * Call back to all the shards in the response and process the result.
     */
    QueryRequest request = new QueryRequest(params);
    List futures = callBack(sreq.responses, request);

    int nextCount = 0;
    for (Future future : futures) {
      QueryResponse response = future.get().getResponse();
      NamedList analytics = (NamedList) response.getResponse().get("analytics");
      Integer c = (Integer) analytics.get("mycount");
      nextCount += c.intValue();
    }

    NamedList merged = new NamedList();
    merged.add("mycount", nextCount);
    rb.rsp.add("analytics", merged);
  }
}
{code}



  was:
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.

Below is an example of a class that extends IterativeMergeStrategy. In this 
example it collects the *count* from the shards and then calls back to shards 
executing *!count* sending it merged counts from all the shards. 

{code}

class TestIterative extends IterativeMergeStrategy  {

public void process(ResponseBuilder rb, ShardRequest sreq) throws Exception 
{
  int count = 0;
  for(ShardResponse shardResponse : sreq.responses) {
NamedList response = shardResponse.getSolrResponse().getResponse();
NamedList analytics = (NamedList)response.get("analytics");
Integer c = (Integer)analytics.get("mycount");
count += c.intValue();
  }

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.add("distrib", "false");
  params.add("fq","{!count base="+count+"}");
  params.add("q","*:*");


  /*
  *  Call back to all the shards in the response and process the result.
   */

  QueryRequest request = new QueryRequest(params);
  List futures = callBack(sreq.responses, request);

  int nextCount = 0;

  for(Future future : futures) {
QueryResponse response = future.get().getResponse();
NamedList analytics = 
(NamedList)response.getResponse().get("analytics");
Integer c = (Integer)analytics.get("mycount");
nextCount += c.intValue();
  }

  NamedList merged = new NamedList();
  merged.add("mycount", nextCount);
  rb.rsp.add("analytics", merged);
}
  }

{code}




> Add IterativeMergeStrategy to support Parallel Iterative Algorithms
> ---
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / 

[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060299#comment-15060299
 ] 

Erick Erickson commented on SOLR-8423:
--

I don't think this breaks back-compat 'cause I don't see any promises in the 
docs ;) But I vaguely recall that this was to clean up stale cluster states 
without having to directly edit the ZK node, so maybe nobody thought about it?

I might advocate an option to keep at least the data directory just on general 
principles here though. I'm thinking of the case where the user is explicitly 
controlling routing, say time-series data. Does it make sense to delete/create 
shards to hide/show some time-interval? I have to admit this is a _theoretical_ 
use-case, haven't seen anyone actually ask for it so feel free to ignore...

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.






[jira] [Updated] (SOLR-8424) Admin UI not reachable in Solr 5.4 under path /solr/ when running in JBoss

2015-12-16 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-8424:
---
Summary: Admin UI not reachable in Solr 5.4 under path /solr/ when running 
in JBoss  (was: Admin UI not reachable in Solr 5.4 under path /solr/)

> Admin UI not reachable in Solr 5.4 under path /solr/ when running in JBoss
> --
>
> Key: SOLR-8424
> URL: https://issues.apache.org/jira/browse/SOLR-8424
> Project: Solr
>  Issue Type: Bug
>  Components: UI, web gui
>Affects Versions: 5.4
> Environment: JBosss AS
>Reporter: Christian Danninger
>Priority: Minor
>  Labels: UI, admin-interface, jboss-as7
>
> In the method org.apache.solr.servlet.LoadAdminUiServlet.doGet,
> request.getRequestURI().substring(request.getContextPath().length())
> returns "" under JBoss. In version 5.2.1 there was a hard-coded value 
> "/admin.html". This leads to an HTTP 404.
> The UI is reachable at the path /solr/admin.html.
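The path computation described in the report can be sketched as below, with 
hypothetical inputs (the servlet and method names come from the report; the 
class and values here are illustrative, and actual container behavior varies).

```java
// Sketch of the substring computation in LoadAdminUiServlet.doGet.
// Under Jetty the request URI for the UI is "/solr/admin.html" and the
// context path is "/solr", leaving "/admin.html" to resolve. If a container
// reports the request URI as just "/solr", the remainder is "", so no
// resource can be resolved and the servlet answers with a 404.
public class AdminPathDemo {
    static String remainder(String requestUri, String contextPath) {
        return requestUri.substring(contextPath.length());
    }

    public static void main(String[] args) {
        System.out.println(remainder("/solr/admin.html", "/solr")); // /admin.html
        System.out.println(remainder("/solr", "/solr").isEmpty()); // true
    }
}
```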






Re: 5.3.2 bug fix release

2015-12-16 Thread Upayavira
Why don't people just upgrade to 5.4? Why do we need another release in
the 5.3.x range?

Upayavira

On Wed, Dec 16, 2015, at 09:12 PM, Shawn Heisey wrote:
> On 12/16/2015 1:08 PM, Anshum Gupta wrote:
> > There are a bunch of important bug fixes that call for a 5.3.2 in my
> > opinion. I'm specifically talking about security plugins related fixes
> > but I'm sure there are others too.
> >
> > Unless someone else wants to do it, I'd volunteer to do the release
> > and cut an RC next Tuesday.
> 
> Sounds like a reasonable idea to me.  I assume these must be fixes that
> are not yet backported.
> 
> I happen to have the 5.3 branch on my dev system, with SOLR-6188
> applied.  It is already up to date.  There's nothing in the 5.3.2
> section of either CHANGES.txt file.  The svn log indicates that nothing
> has been backported since the 5.3.1 release was cut.
> 
> Perhaps SOLR-6188 could be added to the list of fixes to backport.  I
> believe it's a benign change.
> 
> Thinking about CHANGES.txt, this might work for the 5.3 branch:
> 
> 
> === Lucene 5.3.2 ===
> All changes were backported from 5.4.0.
> 
> Bug Fixes
> 
> * LUCENE-: A description (Committer Name)
> 
> 
> If we decide it's a good idea to mention the release in trunk and
> branch_5x, something like the following might work, because that file
> should already contain the full change descriptions:
> 
> 
> === Lucene 5.3.2 ===
> The following issues were backported from 5.4.0:
> LUCENE-
> LUCENE-
> 
> 
> Thanks,
> Shawn
> 
> 




[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning

2015-12-16 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8393:
---
Attachment: SOLR-8393.patch

Adjusted the patch to work with small and empty indexes; also added the number 
of docs used for estimation to the results.

> Component for Solr resource usage planning
> --
>
> Key: SOLR-8393
> URL: https://issues.apache.org/jira/browse/SOLR-8393
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, 
> SOLR-8393.patch, SOLR-8393.patch
>
>
> One question that keeps coming back is how much disk and RAM do I need to run 
> Solr. The most common response is that it highly depends on your data. While 
> true, it makes for frustrated users trying to plan their deployments. 
> The idea I'm bringing is to create a new component that will attempt to 
> extrapolate resources needed in the future by looking at resources currently 
> used. By adding a parameter for the target number of documents, current 
> resources are adapted by a ratio relative to current number of documents.
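The ratio-based extrapolation described above can be sketched as follows. This 
is an illustrative sketch only; the names and shapes are not the component's 
actual API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the proposed extrapolation: scale each currently
// measured resource by targetDocs / currentDocs.
public class ResourceEstimator {
    static Map<String, Double> estimate(Map<String, Double> currentUsage,
                                        long currentDocs, long targetDocs) {
        double ratio = (double) targetDocs / currentDocs;
        Map<String, Double> estimated = new LinkedHashMap<>();
        currentUsage.forEach((resource, amount) -> estimated.put(resource, amount * ratio));
        return estimated;
    }

    public static void main(String[] args) {
        Map<String, Double> current = new LinkedHashMap<>();
        current.put("diskGb", 50.0);
        current.put("ramGb", 8.0);
        // 1M docs today, planning for 4M docs -> ratio 4.0
        System.out.println(estimate(current, 1_000_000L, 4_000_000L));
        // prints {diskGb=200.0, ramGb=32.0}
    }
}
```

Linear scaling is of course only a first approximation, which is consistent 
with the description's framing of this as an extrapolation aid rather than an 
exact answer.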






[jira] [Commented] (SOLR-6398) Add IterativeMergeStrategy to support running Parallel Iterative Algorithms inside of Solr

2015-12-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060664#comment-15060664
 ] 

ASF subversion and git services commented on SOLR-6398:
---

Commit 1720422 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720422 ]

SOLR-6398: Add IterativeMergeStrategy to support running Parallel Iterative 
Algorithms inside of Solr

> Add IterativeMergeStrategy to support running Parallel Iterative Algorithms 
> inside of Solr
> --
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards and then calls back to shards 
> executing the *!count* AnalyticsQuery and sending it merged counts from all 
> the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>   *  Call back to all the shards in the response and process the result.
>*/
>   QueryRequest request = new QueryRequest(params);
>   List futures = callBack(sreq.responses, request);
>   int nextCount = 0;
>   for(Future future : futures) {
> QueryResponse response = future.get().getResponse();
> NamedList analytics = 
> (NamedList)response.getResponse().get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> nextCount += c.intValue();
>   }
>   NamedList merged = new NamedList();
>   merged.add("mycount", nextCount);
>   rb.rsp.add("analytics", merged);
> }
>   }
> {code}






[jira] [Updated] (SOLR-8428) create an 'all' permission to protect all paths

2015-12-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8428:
-
Attachment: SOLR-8428.patch

> create an 'all' permission to protect all paths
> ---
>
> Key: SOLR-8428
> URL: https://issues.apache.org/jira/browse/SOLR-8428
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8428.patch
>
>
> There should be a well-known permission called all which can include all 
> paths served by Solr
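For illustration, such a permission might be declared in the authorization 
section of security.json roughly as below. This is a hypothetical sketch; the 
exact name and semantics are what this issue is deciding.

```json
{
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "all", "role": "admin" }
    ],
    "user-role": { "solradmin": ["admin"] }
  }
}
```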






5.3.2 bug fix release

2015-12-16 Thread Anshum Gupta
There are a bunch of important bug fixes that call for a 5.3.2 in my
opinion. I'm specifically talking about security plugins related fixes but
I'm sure there are others too.

Unless someone else wants to do it, I'd volunteer to do the release and cut
an RC next Tuesday.

-- 
Anshum Gupta


[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060823#comment-15060823
 ] 

Dawid Weiss commented on LUCENE-6922:
-

LUCENE-6933 is an attempt to clean up and consolidate all the SVN history and, 
eventually, move to git entirely.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Updated] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8422:
-
Attachment: SOLR-8422.patch

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> SolrCloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the SolrCloud. A sample screenshot of the collection/shard 
> locations is shown below:
> Step 1 - Our Solr indexing tool sends a request to any one of the Solr 
> servers in the SolrCloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is passed along.
> As a result, sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the Solr indexing tool.
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=unid,sequence,folderunid=xml=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=10=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another SolrCloud, if the indexing tool sends the Solr get 
> request to the server that has collection1, I see basic authentication 
> working as expected.
> I double-checked and both sgdsolar1/sgdsolar2 servers have the patched 
> solr-core and solr-solrj jar files under the solr-webapp folder that were 
> provided via earlier patches that Anshum/Noble worked on:
> SOLR-8167 (fixes the POST issue)
> SOLR-8326 (fixes PKIAuthenticationPlugin)
> SOLR-8355






[jira] [Closed] (SOLR-6398) Add IterativeMergeStrategy to support running Parallel Iterative Algorithms inside of Solr

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-6398.

Resolution: Fixed

> Add IterativeMergeStrategy to support running Parallel Iterative Algorithms 
> inside of Solr
> --
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards and then calls back to shards 
> executing the *!count* AnalyticsQuery and sending it merged counts from all 
> the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>    * Call back to all the shards in the response and process the result.
>    */
>   QueryRequest request = new QueryRequest(params);
>   List futures = callBack(sreq.responses, request);
>   int nextCount = 0;
>   for(Future future : futures) {
> QueryResponse response = future.get().getResponse();
> NamedList analytics = 
> (NamedList)response.getResponse().get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> nextCount += c.intValue();
>   }
>   NamedList merged = new NamedList();
>   merged.add("mycount", nextCount);
>   rb.rsp.add("analytics", merged);
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2891 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2891/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Could not load collection from ZK:c8n_crud_1x2

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from 
ZK:c8n_crud_1x2
at 
__randomizedtesting.SeedInfo.seed([73CEC68293C3B26C:FB9AF9583D3FDF94]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1023)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:556)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:190)
at 
org.apache.solr.common.cloud.ClusterState.getLeader(ClusterState.java:96)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeader(ZkStateReader.java:617)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:638)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:629)
at 
org.apache.solr.cloud.HttpPartitionTest.testLeaderInitiatedRecoveryCRUD(HttpPartitionTest.java:136)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060830#comment-15060830
 ] 

Terry Smith commented on LUCENE-6922:
-

Ah, so I could consider this as a backup plan until LUCENE-6933 is ready?


> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060845#comment-15060845
 ] 

Paul Elschot commented on LUCENE-6922:
--

LUCENE-6933 could affect this in the choice of the starting point for 
generating a git branch. The script here uses the latest git-svn-id: commit 
for that, but that could be changed.

In case git-svn picks up again, the script currently restarts the generated 
branch, which is why the generated branch is called temporary and should 
normally only be used locally. However, when the git-svn mirror is completely 
turned off, that temporary character will disappear.

I don't know how git-svn is doing for LUCENE-6933. In case git-svn runs into 
problems, this script might be used instead, with a few changes of course. The 
script is now mainly for local use.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






Re: 5.3.2 bug fix release

2015-12-16 Thread Shawn Heisey
On 12/16/2015 1:08 PM, Anshum Gupta wrote:
> There are a bunch of important bug fixes that call for a 5.3.2 in my
> opinion. I'm specifically talking about security plugins related fixes
> but I'm sure there are others too.
>
> Unless someone else wants to do it, I'd volunteer to do the release
> and cut an RC next Tuesday.

Sounds like a reasonable idea to me.  I assume these must be fixes that
are not yet backported.

I happen to have the 5.3 branch on my dev system, with SOLR-6188
applied.  It is already up to date.  There's nothing in the 5.3.2
section of either CHANGES.txt file.  The svn log indicates that nothing
has been backported since the 5.3.1 release was cut.

Perhaps SOLR-6188 could be added to the list of fixes to backport.  I
believe it's a benign change.

Thinking about CHANGES.txt, this might work for the 5.3 branch:


=== Lucene 5.3.2 ===
All changes were backported from 5.4.0.

Bug Fixes

* LUCENE-: A description (Committer Name)


If we decide it's a good idea to mention the release in trunk and
branch_5x, something like the following might work, because that file
should already contain the full change descriptions:


=== Lucene 5.3.2 ===
The following issues were backported from 5.4.0:
LUCENE-
LUCENE-


Thanks,
Shawn





[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060890#comment-15060890
 ] 

Paul Elschot commented on LUCENE-6922:
--

To get an impression of what this is currently doing, please fetch the 
trunk.svn.20151216 tag into an existing lucene-solr repository:

{code}git fetch https://github.com/PaulElschot/lucene-solr.git 
trunk.svn.20151216{code}


> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Comment Edited] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060890#comment-15060890
 ] 

Paul Elschot edited comment on LUCENE-6922 at 12/16/15 9:17 PM:


To get an impression of what this is currently doing please fetch the 
trunk.svn.20151216 tag into an existing lucene-solr repository:

{code}git fetch https://github.com/PaulElschot/lucene-solr.git 
trunk.svn.20151216{code}



was (Author: paul.elsc...@xs4all.nl):
I to get an impression of what this is currently doing please fetch the 
trunk.svn.20151216 tag into an existing lucene-solr repository:

{code}git fetch https://github.com/PaulElschot/lucene-solr.git 
trunk.svn.20151216{code}


> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Closed] (SOLR-8424) Admin UI not reachable in Solr 5.4 under path /solr/ when running in JBoss

2015-12-16 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-8424.
--
Resolution: Cannot Reproduce

Running in third-party containers is no longer officially supported as of 
version 5.0.

Although you still CAN run this way, any problems encountered will only be 
fixed if they are problems with the included Jetty install.  Users are strongly 
encouraged to use the included scripts and container.

More detailed info can be found here:
https://wiki.apache.org/solr/WhyNoWar#preview

I cannot reproduce this problem when using "bin/solr start" to run Solr 5.4.0 
with the included Jetty container.  All relevant URL combinations appear to 
resolve correctly and display the admin UI.

Your JBoss configuration solution will remain here for others to find.

> Admin UI not reachable in Solr 5.4 under path /solr/ when running in JBoss
> --
>
> Key: SOLR-8424
> URL: https://issues.apache.org/jira/browse/SOLR-8424
> Project: Solr
>  Issue Type: Bug
>  Components: UI, web gui
>Affects Versions: 5.4
> Environment: JBosss AS
>Reporter: Christian Danninger
>Priority: Minor
>  Labels: UI, admin-interface, jboss-as7
>
> In org.apache.solr.servlet.LoadAdminUiServlet.doGet, the expression
> request.getRequestURI().substring(request.getContextPath().length())
> returns "" when running under JBoss. In version 5.2.1 there was a 
> hard-coded value "/admin.html". This leads to an HTTP 404.
> The UI is reachable at the path /solr/admin.html.
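The failure mode is easy to see in isolation. Below is a minimal sketch (with hypothetical URI values, not the actual servlet code): when the container reports a request URI equal to the context path, the computed resource path comes out empty.

```java
public class AdminUiPathDemo {
    // Simplified stand-in for the path computation in LoadAdminUiServlet.doGet.
    static String resolvePath(String requestURI, String contextPath) {
        return requestURI.substring(contextPath.length());
    }

    public static void main(String[] args) {
        // Jetty reports the full URI for the admin UI request:
        System.out.println(resolvePath("/solr/admin.html", "/solr"));
        // JBoss's welcome-file handling can leave the URI equal to the
        // context path, so the computed path is empty and the lookup 404s:
        System.out.println(resolvePath("/solr", "/solr"));
    }
}
```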






[jira] [Updated] (LUCENE-6922) Improve svn to git workaround script

2015-12-16 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6922:
-
Attachment: svnBranchToGit.py

Script of 16 December 2015. This was a major overhaul:

Added a verifyGitFilesAgainstSvn function that is only called when the local 
git repo is completely up to date with the remote svn repository. This takes 
almost 20 seconds here, but it might be made faster.

The script now works by checking out svn revisions instead of asking svn to 
provide a patch for each revision. This is a nice speed-up (to about 1 second 
per commit now), and it also makes binary files much easier to handle.

Added setting of file protection bits from svn to git for the files that are 
changed. Svn properties are still completely ignored, except that when there 
is a property change, the file protection bits are also set.

Made the script work under both python 2 and python 3.
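The script itself is Python, but purely as an illustration of what mirroring svn file protection bits amounts to on a POSIX filesystem, here is a small Java sketch (the class and method names are hypothetical, not part of the script):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.HashSet;
import java.util.Set;

public class ExecBitDemo {
    // Set or clear the owner-executable bit on a file: the essence of
    // carrying an svn:executable-style property over into a working tree.
    static void setOwnerExecutable(Path p, boolean executable) throws IOException {
        Set<PosixFilePermission> perms =
                new HashSet<>(Files.getPosixFilePermissions(p));
        if (executable) {
            perms.add(PosixFilePermission.OWNER_EXECUTE);
        } else {
            perms.remove(PosixFilePermission.OWNER_EXECUTE);
        }
        Files.setPosixFilePermissions(p, perms);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("execbit", ".tmp");
        setOwnerExecutable(tmp, true);
        System.out.println(Files.getPosixFilePermissions(tmp)
                .contains(PosixFilePermission.OWNER_EXECUTE));
        Files.delete(tmp);
    }
}
```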


> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.






[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060419#comment-15060419
 ] 

Mark Miller commented on SOLR-8416:
---

Thanks for the patch, a couple comments:

* Usually it's better to use SolrException over RuntimeException
* Where you catch the interrupted exception, you should restore the interrupted 
status.
* The failure exception should probably give detail on which replicas were not 
found to be live and ACTIVE.
* Should not use hard coded 'active' but the relevant constants.
* Should probably check if the replicas node is listed under live nodes as well 
as if it's active?

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after 
> they are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it is better that the collection 
> creation API waits for all cores to become alive and returns only after that.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2946 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2946/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([CEF60B09287B8E53]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([CEF60B09287B8E53]:0)




Build Log:
[...truncated 11304 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_CEF60B09287B8E53-001/init-core-data-001
   [junit4]   2> 2059626 INFO  
(SUITE-HttpPartitionTest-seed#[CEF60B09287B8E53]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2059629 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2059634 INFO  (Thread-4787) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2059634 INFO  (Thread-4787) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2059730 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.ZkTestServer start zk server on port:56809
   [junit4]   2> 2059731 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2059732 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2059922 INFO  (zkCallback-1409-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@590749fc 
name:ZooKeeperConnection Watcher:127.0.0.1:56809 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2059923 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2059923 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2059923 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 2059930 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2059934 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2059935 INFO  (zkCallback-1410-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@20897925 
name:ZooKeeperConnection Watcher:127.0.0.1:56809/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2059935 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2059936 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2059936 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2> 2059956 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards
   [junit4]   2> 2059960 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection
   [junit4]   2> 2059967 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection/shards
   [junit4]   2> 2059975 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2059980 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.c.SolrZkClient makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 2060010 INFO  
(TEST-HttpPartitionTest.test-seed#[CEF60B09287B8E53]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2060011 INFO  

[jira] [Commented] (SOLR-6398) Add IterativeMergeStrategy to support running Parallel Iterative Algorithms inside of Solr

2015-12-16 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060463#comment-15060463
 ] 

Joel Bernstein commented on SOLR-6398:
--

This is looking ready to commit to trunk, I believe. I'll be experimenting 
with this framework in the next couple of weeks with gradient descent and 
logistic regression modeling.

> Add IterativeMergeStrategy to support running Parallel Iterative Algorithms 
> inside of Solr
> --
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards, then calls back to the 
> shards, executing the *!count* AnalyticsQuery and sending it the merged 
> count from all the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>    * Call back to all the shards in the response and process the result.
>    */
>   QueryRequest request = new QueryRequest(params);
>   List futures = callBack(sreq.responses, request);
>   int nextCount = 0;
>   for(Future future : futures) {
> QueryResponse response = future.get().getResponse();
> NamedList analytics = 
> (NamedList)response.getResponse().get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> nextCount += c.intValue();
>   }
>   NamedList merged = new NamedList();
>   merged.add("mycount", nextCount);
>   rb.rsp.add("analytics", merged);
> }
>   }
> {code}






[jira] [Assigned] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-8422:


Assignee: Noble Paul

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 
> 5 servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:
> Step 1 - Our solr indexing tool sends a request to any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is passed along. 
> As a result, sgdsolar2 throws a 401 error back to the source server 
> sgdsolar1 and all the way back to the solr indexing tool
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=unid,sequence,folderunid=xml=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=10=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic authentication 
> working as expected.
> I double-checked and see that both the sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder, 
> provided via the earlier patches that Anshum/Noble worked on:
> SOLR-8167 fixes the POST issue 
> SOLR-8326  fixing PKIAuthenticationPlugin.
> SOLR-8355






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Description: 
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.




  was:
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.





> Add IterativeMergeStrategy to support Parallel Iterative Algorithms
> ---
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JSON "fields" vs defaults

2015-12-16 Thread Ryan Josal
To respond to your last question Yonik, the "fl" in the query params
behaves the same as the "fl" in the JSON params block, so no bug there.
There is a difference, though, between those and the top-level "fields"
param: as Jack pointed out, it appends to the defaults instead of overriding
them like "fl" does.  To me it seems natural for "fields" to behave like "fl",
and I also agree that "filter" overriding defaults is a more questionable
use case.  But it's good to bring it up, for consistency with fq.
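The distinction being debated can be sketched in a few lines. This is a toy model, not Solr's actual code: "defaults" only fill in a param the request did not set, while "appends" adds values on top of the request's own. The JSON "fields" key behaving like "appends" rather than like a request-level "fl" is exactly the surprise Ryan describes; the class and method names below are illustrative only.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParamMerge {

    // "defaults": a default value is used only when the request did not set the param.
    static Map<String, List<String>> applyDefaults(Map<String, List<String>> request,
                                                   Map<String, List<String>> defaults) {
        Map<String, List<String>> merged = copy(request);
        defaults.forEach(merged::putIfAbsent);   // request values win over defaults
        return merged;
    }

    // "appends": extra values are added on top of whatever the request set.
    static Map<String, List<String>> applyAppends(Map<String, List<String>> request,
                                                  Map<String, List<String>> appends) {
        Map<String, List<String>> merged = copy(request);
        appends.forEach((k, v) ->
            merged.computeIfAbsent(k, x -> new ArrayList<>()).addAll(v));
        return merged;
    }

    // Deep-copies the value lists so merging never mutates the inputs.
    private static Map<String, List<String>> copy(Map<String, List<String>> m) {
        Map<String, List<String>> c = new LinkedHashMap<>();
        m.forEach((k, v) -> c.put(k, new ArrayList<>(v)));
        return c;
    }

    public static void main(String[] args) {
        Map<String, List<String>> request  = Map.of("fl", List.of("id,title"));
        Map<String, List<String>> defaults = Map.of("fl", List.of("*"));

        // fl as a "default" is overridden by the request's fl:
        System.out.println(applyDefaults(request, defaults).get("fl")); // [id,title]

        // treating the same config value as an "append" keeps both:
        System.out.println(applyAppends(request, defaults).get("fl"));  // [id,title, *]
    }
}
```

If "fields" in the JSON body is routed through the appends-style path while "fl" goes through the defaults-style path, the behavior reported above falls out directly.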

On Tuesday, December 15, 2015, Jack Krupansky 
wrote:

> In a normal query multiple fl parameters are additive, but they
> collectively override whatever fl parameter(s) may have been specified in
> "defaults", right? I mean, that's why Solr has "appends" in addition to
> "defaults", right?
>
> Ah, but I see in the JSON Request API doc that it says "Multi-valued
> elements like fields and filter are appended", seeming to imply that the
> "defaults" section will be treated as if it were "appends", it would seem,
> at least for how "fields" is treated.
>
> See:
> https://cwiki.apache.org/confluence/display/solr/JSON+Request+API
>
> Filter seems to make sense for this auto-appends mode, but fields/fl don't
> seem to benefit from appending rather than treating the defaults section in
> the traditional manner, I think.
>
> -- Jack Krupansky
>
> On Tue, Dec 15, 2015 at 9:06 PM, Yonik Seeley  > wrote:
>
>> Multiple "fl" parameters are additive, so it would make sense that
>> "fields" is also (for fl and field in the same request).  If that's
>> true for "fl" as a default and "fl" as a query param, then it seems
>> like that should be true for the other variants.
>>
>> If "fl" as a query param and "fl" in a JSON params block don't act the
>> same, that should probably be a bug?
>>
>> -Yonik
>>
>>
>> On Tue, Dec 15, 2015 at 7:55 PM, Jack Krupansky
>> > > wrote:
>> > Yonik? The doc is weak in this area. In fact, I see a comment on it from
>> > Cassandra directed to you to verify the JSON to parameter mapping. It
>> would
>> > be nice to have a clear statement of the semantics for JSON "fields"
>> > parameter and how it may or may not interact with the Solr fl parameter.
>> >
>> > -- Jack Krupansky
>> >
>> > On Thu, Dec 10, 2015 at 3:55 PM, Ryan Josal > > wrote:
>> >>
>> >> I didn't see a Jira open in this, so I wanted to see if it's expected.
>> If
>> >> you pass "fields":[...] in a SOLR JSON API request, it does not
>> override
>> >> what's the default in the handler config.  I had fl=* as a default, so
>> I saw
>> >> "fields" have no effect, while "params":{"fl":...} worked as expected.
>> >> After stepping through the debugger I noticed it was just appending
>> "fields"
>> >> at the end of everything else (including after solr config appends, if
>> it
>> >> makes a difference).
>> >>
>> >> If this is not expected I will create a Jira and maybe have time to
>> >> provide a patch.
>> >>
>> >> Ryan
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> 
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
>>
>>
>


[jira] [Updated] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8416:
--
Attachment: (was: CDH-33586.patch)

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return only after that.






[jira] [Updated] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8416:
--
Attachment: CDH-33586.patch

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return only after that.






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Description: 
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.

Below is an example of a class that extends IterativeMergeStrategy. In this 
example it collects the *count* from the shards and then calls back to the 
shards, executing *!count* and sending it the merged counts from all the shards. 

{code}

class TestIterative extends IterativeMergeStrategy  {

public void process(ResponseBuilder rb, ShardRequest sreq) throws Exception 
{
  int count = 0;
  for(ShardResponse shardResponse : sreq.responses) {
NamedList response = shardResponse.getSolrResponse().getResponse();
NamedList analytics = (NamedList)response.get("analytics");
Integer c = (Integer)analytics.get("mycount");
count += c.intValue();
  }

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.add("distrib", "false");
  params.add("fq","{!count base="+count+"}");
  params.add("q","*:*");


  /*
  *  Call back to all the shards in the response and process the result.
   */

  QueryRequest request = new QueryRequest(params);
  List futures = callBack(sreq.responses, request);

  int nextCount = 0;

  for(Future future : futures) {
QueryResponse response = future.get().getResponse();
NamedList analytics = 
(NamedList)response.getResponse().get("analytics");
Integer c = (Integer)analytics.get("mycount");
nextCount += c.intValue();
  }

  NamedList merged = new NamedList();
  merged.add("mycount", nextCount);
  rb.rsp.add("analytics", merged);
}
  }

{code}



  was:
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.

Below is as example of a class that extends IterativeMergeStrategy. In this 
example it collects the *count* from the shards and then calls back to shards 
executing *!count* sending it merged counts from all the shards. 

{code}

class TestIterative extends IterativeMergeStrategy  {

public void process(ResponseBuilder rb, ShardRequest sreq) throws Exception 
{
  int count = 0;
  for(ShardResponse shardResponse : sreq.responses) {
NamedList response = shardResponse.getSolrResponse().getResponse();
NamedList analytics = (NamedList)response.get("analytics");
Integer c = (Integer)analytics.get("mycount");
count += c.intValue();
  }

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.add("distrib", "false");
  params.add("fq","{!count base="+count+"}");
  params.add("q","*:*");


  /*
  *  Call back to all the shards in the response and process the result.
   */

  QueryRequest request = new QueryRequest(params);
  List futures = callBack(sreq.responses, request);

  int nextCount = 0;

  for(Future future : futures) {
QueryResponse response = future.get().getResponse();
NamedList analytics = 
(NamedList)response.getResponse().get("analytics");
Integer c = (Integer)analytics.get("mycount");
nextCount += c.intValue();
  }

  NamedList merged = new NamedList();
  merged.add("mycount", nextCount);
  rb.rsp.add("analytics", merged);
}
  }

{code}




> Add IterativeMergeStrategy to support Parallel Iterative Algorithms
> ---
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> 

[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Description: 
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.

Below is an example of a class that extends IterativeMergeStrategy. In this 
example it collects the *count* from the shards and then calls back to shards 
executing *!count* sending it merged counts from all the shards. 

{code}

class TestIterative extends IterativeMergeStrategy  {

public void process(ResponseBuilder rb, ShardRequest sreq) throws Exception 
{
  int count = 0;
  for(ShardResponse shardResponse : sreq.responses) {
NamedList response = shardResponse.getSolrResponse().getResponse();
NamedList analytics = (NamedList)response.get("analytics");
Integer c = (Integer)analytics.get("mycount");
count += c.intValue();
  }

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.add("distrib", "false");
  params.add("fq","{!count base="+count+"}");
  params.add("q","*:*");


  /*
  *  Call back to all the shards in the response and process the result.
   */

  QueryRequest request = new QueryRequest(params);
  List futures = callBack(sreq.responses, request);

  int nextCount = 0;

  for(Future future : futures) {
QueryResponse response = future.get().getResponse();
NamedList analytics = 
(NamedList)response.getResponse().get("analytics");
Integer c = (Integer)analytics.get("mycount");
nextCount += c.intValue();
  }

  NamedList merged = new NamedList();
  merged.add("mycount", nextCount);
  rb.rsp.add("analytics", merged);
}
  }

{code}



  was:
This ticket builds on the existing AnalyticsQuery / MergeStrategy framework by 
adding the abstract class IterativeMergeStrategy,  which has built-in support 
for call-backs to the shards. The IterativeMergeStrategy is designed to support 
the execution of Parallel iterative Algorithms, such as Gradient Descent, 
inside of Solr.

To use the IterativeMergeStrategy you extend it and implement process(). This 
gives you access to the callback() method which makes it easy to make repeated 
calls to all the shards and run algorithms that require iteration.





> Add IterativeMergeStrategy to support Parallel Iterative Algorithms
> ---
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards and then calls back to shards 
> executing *!count* sending it merged counts from all the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>   *  Call back to all the shards in the response and process the result.
>*/
>   QueryRequest request = new 

[jira] [Created] (SOLR-8428) create an 'all' permission to protect all paths

2015-12-16 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8428:


 Summary: create an 'all' permission to protect all paths
 Key: SOLR-8428
 URL: https://issues.apache.org/jira/browse/SOLR-8428
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul


There should be a well-known permission called {{all}} which covers all paths 
served by Solr.






[jira] [Updated] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8416:
--
Attachment: SOLR-8416.patch

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return only after that.






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support running Parallel Iterative Algorithms inside of Solr

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Summary: Add IterativeMergeStrategy to support running Parallel Iterative 
Algorithms inside of Solr  (was: Add IterativeMergeStrategy to support Parallel 
Iterative Algorithms)

> Add IterativeMergeStrategy to support running Parallel Iterative Algorithms 
> inside of Solr
> --
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards and then calls back to shards 
> executing the *!count* AnalyticsQuery and sending it merged counts from all 
> the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>   *  Call back to all the shards in the response and process the result.
>*/
>   QueryRequest request = new QueryRequest(params);
>   List futures = callBack(sreq.responses, request);
>   int nextCount = 0;
>   for(Future future : futures) {
> QueryResponse response = future.get().getResponse();
> NamedList analytics = 
> (NamedList)response.getResponse().get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> nextCount += c.intValue();
>   }
>   NamedList merged = new NamedList();
>   merged.add("mycount", nextCount);
>   rb.rsp.add("analytics", merged);
> }
>   }
> {code}






[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060396#comment-15060396
 ] 

Michael Sun commented on SOLR-8416:
---

A patch is uploaded. Here are some thoughts:

1. The patch polls shard status from ZooKeeper and returns once all shards are 
active within a preset time, otherwise it throws an exception. An alternative 
is to wait for ZooKeeper notifications, but I am not sure how much that would 
gain.
2. The total wait time should be configurable to fit large clusters. Should it 
be a Solr config or a collection config? A collection config is more natural, 
but a Solr config that can be set in CM may be easier for users.
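The polling approach can be reduced to a small generic loop: check the cluster state, return once every shard reports active, and give up after a configurable total wait. This is a standalone sketch under stated assumptions, not the attached patch; the {{stateSupplier}} stands in for reading shard states from ZooKeeper, and all names here are illustrative rather than Solr's actual API.

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class WaitForActive {

    // Polls until every shard in the supplied state map is "active".
    // Returns false once maxWaitMs has elapsed; a caller could throw instead.
    static boolean waitForAllActive(Supplier<Map<String, String>> stateSupplier,
                                    long maxWaitMs, long pollIntervalMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(maxWaitMs);
        while (true) {
            Map<String, String> shardStates = stateSupplier.get();
            boolean allActive = shardStates.values().stream().allMatch("active"::equals);
            if (allActive) return true;
            if (System.nanoTime() >= deadline) return false;
            Thread.sleep(pollIntervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake cluster: shard2 only flips to active on the third poll.
        int[] calls = {0};
        Supplier<Map<String, String>> fake = () -> Map.of(
            "shard1", "active",
            "shard2", ++calls[0] >= 3 ? "active" : "down");
        System.out.println(waitForAllActive(fake, 1000, 10));
    }
}
```

The total wait (here {{maxWaitMs}}) is the one knob that needs to be exposed in config, which is the question raised above.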



> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return only after that.






[jira] [Comment Edited] (SOLR-8368) Investigate a leader using older versions than it's replicas has for leader election peer sync after a 'crash' shutdown.

2015-12-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060426#comment-15060426
 ] 

Mark Miller edited comment on SOLR-8368 at 12/16/15 6:02 PM:
-

I've renamed this 'investigation'. Given that we work from uncapped tlogs, 
something else happened here. I've been trying to figure out what, using logs 
and attempting to reproduce it, but so far I don't have it. I have some more 
ideas to try, but the current attached patch is unnecessary.


was (Author: markrmil...@gmail.com):
I've renamed this investigation. Given we work from uncapped tlogs, something 
else happened here. I've been trying to figure out what using logs and 
attempting to replicate, but so far I don't have it. I have some more ideas to 
try, but the current attatched patch is unnecessary.

> Investigate a leader using older versions than it's replicas has for leader 
> election peer sync after a 'crash' shutdown.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8368.patch
>
>
> If we do it after like now, the correct leader may not be able to become 
> leader.






[jira] [Commented] (SOLR-8368) Investigate a leader using older versions than it's replicas has for leader election peer sync after a 'crash' shutdown.

2015-12-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060426#comment-15060426
 ] 

Mark Miller commented on SOLR-8368:
---

I've renamed this investigation. Given that we work from uncapped tlogs, 
something else happened here. I've been trying to figure out what, using logs 
and attempting to reproduce it, but so far I don't have it. I have some more 
ideas to try, but the current attached patch is unnecessary.

> Investigate a leader using older versions than it's replicas has for leader 
> election peer sync after a 'crash' shutdown.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8368.patch
>
>
> If we do it after like now, the correct leader may not be able to become 
> leader.






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-
Fix Version/s: (was: 5.2)

> Add IterativeMergeStrategy to support Parallel Iterative Algorithms
> ---
>
> Key: SOLR-6398
> URL: https://issues.apache.org/jira/browse/SOLR-6398
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch, 
> SOLR-6398.patch
>
>
> This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
> by adding the abstract class IterativeMergeStrategy,  which has built-in 
> support for call-backs to the shards. The IterativeMergeStrategy is designed 
> to support the execution of Parallel iterative Algorithms, such as Gradient 
> Descent, inside of Solr.
> To use the IterativeMergeStrategy you extend it and implement process(). This 
> gives you access to the callback() method which makes it easy to make 
> repeated calls to all the shards and run algorithms that require iteration.
> Below is an example of a class that extends IterativeMergeStrategy. In this 
> example it collects the *count* from the shards and then calls back to shards 
> executing the *!count* AnalyticsQuery and sending it merged counts from all 
> the shards. 
> {code}
> class TestIterative extends IterativeMergeStrategy  {
> public void process(ResponseBuilder rb, ShardRequest sreq) throws 
> Exception {
>   int count = 0;
>   for(ShardResponse shardResponse : sreq.responses) {
> NamedList response = shardResponse.getSolrResponse().getResponse();
> NamedList analytics = (NamedList)response.get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> count += c.intValue();
>   }
>   ModifiableSolrParams params = new ModifiableSolrParams();
>   params.add("distrib", "false");
>   params.add("fq","{!count base="+count+"}");
>   params.add("q","*:*");
>   /*
>   *  Call back to all the shards in the response and process the result.
>*/
>   QueryRequest request = new QueryRequest(params);
>   List futures = callBack(sreq.responses, request);
>   int nextCount = 0;
>   for(Future future : futures) {
> QueryResponse response = future.get().getResponse();
> NamedList analytics = 
> (NamedList)response.getResponse().get("analytics");
> Integer c = (Integer)analytics.get("mycount");
> nextCount += c.intValue();
>   }
>   NamedList merged = new NamedList();
>   merged.add("mycount", nextCount);
>   rb.rsp.add("analytics", merged);
> }
>   }
> {code}






[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060441#comment-15060441
 ] 

Mark Miller commented on SOLR-8416:
---

bq. 1. The patch polls shard status from ZooKeeper and returns once all shards 
are active within a preset time, otherwise it throws an exception. An 
alternative is to wait for ZooKeeper notifications, but I am not sure how much 
that would gain.

I agree, the complication is probably not worth it.


bq. Should it be a Solr config or a collection config?

I would make it a Solr config - we need that ease of use at least. Later, if 
someone wants to override it per collection or something, we can look at adding 
that in an overridable way.

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return only after that.






[jira] [Commented] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-16 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060895#comment-15060895
 ] 

Joel Bernstein commented on SOLR-8191:
--

Ok, this looks fine to me. 

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check whether cloudSolrClient or solrStreams is null, 
> so calling close() in those cases throws a NullPointerException.
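The shape of the guard being discussed is simple: close() should tolerate fields that were never initialized, e.g. when a stream is closed before open() ever succeeded. This is a standalone sketch mirroring the field names in the report, not the actual CloudSolrStream patch.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public class GuardedStream implements Closeable {
    Closeable cloudSolrClient;        // may still be null if open() never ran
    List<Closeable> solrStreams;      // may still be null if open() never ran

    @Override
    public void close() throws IOException {
        // Null-check both the list and its elements before closing anything.
        if (solrStreams != null) {
            for (Closeable s : solrStreams) {
                if (s != null) s.close();
            }
        }
        if (cloudSolrClient != null) {
            cloudSolrClient.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Without the null checks, closing an unopened stream throws NPE.
        new GuardedStream().close();
        System.out.println("closed cleanly");
    }
}
```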






[jira] [Closed] (SOLR-8191) Guard against CloudSolrStream close method NullPointerException

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8191.

Resolution: Fixed

> Guard against CloudSolrStream close method NullPointerException
> ---
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check whether cloudSolrClient or solrStreams is null, 
> so calling close() in those cases throws a NullPointerException.






Re: 5.3.2 bug fix release

2015-12-16 Thread Ishan Chattopadhyaya
Anshum,
If there happens to be a security bugfix release, I'd like to have
SOLR-8373 included as well. It is a deal breaker for anyone who uses
Kerberos support and wants to have more than one Solr node per host.
Thanks,
Ishan

On Thu, Dec 17, 2015 at 2:45 AM, Upayavira  wrote:

> Why don't people just upgrade to 5.4? Why do we need another release in
> the 5.3.x range?
>
> Upayavira
>
> On Wed, Dec 16, 2015, at 09:12 PM, Shawn Heisey wrote:
> > On 12/16/2015 1:08 PM, Anshum Gupta wrote:
> > > There are a bunch of important bug fixes that call for a 5.3.2 in my
> > > opinion. I'm specifically talking about security plugins related fixes
> > > but I'm sure there are others too.
> > >
> > > Unless someone else wants to do it, I'd volunteer to do the release
> > > and cut an RC next Tuesday.
> >
> > Sounds like a reasonable idea to me.  I assume these must be fixes that
> > are not yet backported.
> >
> > I happen to have the 5.3 branch on my dev system, with SOLR-6188
> > applied.  It is already up to date.  There's nothing in the 5.3.2
> > section of either CHANGES.txt file.  The svn log indicates that nothing
> > has been backported since the 5.3.1 release was cut.
> >
> > Perhaps SOLR-6188 could be added to the list of fixes to backport.  I
> > believe it's a benign change.
> >
> > Thinking about CHANGES.txt, this might work for the 5.3 branch:
> >
> > 
> > === Lucene 5.3.2 ===
> > All changes were backported from 5.4.0.
> >
> > Bug Fixes
> >
> > * LUCENE-: A description (Committer Name)
> > 
> >
> > If we decide it's a good idea to mention the release in trunk and
> > branch_5x, something like the following might work, because that file
> > should already contain the full change descriptions:
> >
> > 
> > === Lucene 5.3.2 ===
> > The following issues were backported from 5.4.0:
> > LUCENE-
> > LUCENE-
> > 
> >
> > Thanks,
> > Shawn
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: 5.3.2 bug fix release

2015-12-16 Thread Steve Rowe
Historical aside: I was thinking we’d never produced a bugfix release on a 
previous minor branch, but I went looking and found that 4.9.1, released 
9/21/14, was just such a one: 4.10.0 was released on 9/4/14.  AFAICT Mike 
McCandless initially proposed a 4.9.1 release on 9/16/14 here: 
.

Steve

> On Dec 16, 2015, at 4:15 PM, Upayavira  wrote:
> 
> Why don't people just upgrade to 5.4? Why do we need another release in
> the 5.3.x range?
> 
> Upayavira
> 
> On Wed, Dec 16, 2015, at 09:12 PM, Shawn Heisey wrote:
>> On 12/16/2015 1:08 PM, Anshum Gupta wrote:
>>> There are a bunch of important bug fixes that call for a 5.3.2 in my
>>> opinion. I'm specifically talking about security plugins related fixes
>>> but I'm sure there are others too.
>>> 
>>> Unless someone else wants to do it, I'd volunteer to do the release
>>> and cut an RC next Tuesday.
>> 
>> Sounds like a reasonable idea to me.  I assume these must be fixes that
>> are not yet backported.
>> 
>> I happen to have the 5.3 branch on my dev system, with SOLR-6188
>> applied.  It is already up to date.  There's nothing in the 5.3.2
>> section of either CHANGES.txt file.  The svn log indicates that nothing
>> has been backported since the 5.3.1 release was cut.
>> 
>> Perhaps SOLR-6188 could be added to the list of fixes to backport.  I
>> believe it's a benign change.
>> 
>> Thinking about CHANGES.txt, this might work for the 5.3 branch:
>> 
>> 
>> === Lucene 5.3.2 ===
>> All changes were backported from 5.4.0.
>> 
>> Bug Fixes
>> 
>> * LUCENE-: A description (Committer Name)
>> 
>> 
>> If we decide it's a good idea to mention the release in trunk and
>> branch_5x, something like the following might work, because that file
>> should already contain the full change descriptions:
>> 
>> 
>> === Lucene 5.3.2 ===
>> The following issues were backported from 5.4.0:
>> LUCENE-
>> LUCENE-
>> 
>> 
>> Thanks,
>> Shawn
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061015#comment-15061015
 ] 

Dawid Weiss commented on LUCENE-6933:
-

After some more digging and experiments, it seems realistic that the following 
multi-step process will achieve the goals above.
* cat /dev/null on all jar files (and possibly other binaries) directly on the 
SVN dump,
* create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* use {{git-svn}} to mirror (separately) {{lucene/java/*}}, {{lucene/dev/*}} 
and Solr's pre-merge history.
* import those separate history trees into one git repo, use grafts and branch 
filtering to stitch them together.
* do any finalizing cleanups (correct commit author addresses, clean up any 
junk branches, tags, add actual release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (SOLR-8191) Guard against CloudSolrStream close method NullPointerException

2015-12-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060931#comment-15060931
 ] 

ASF subversion and git services commented on SOLR-8191:
---

Commit 1720460 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720460 ]

SOLR-8191: Guard against CloudSolrStream close method NullPointerException

> Guard against CloudSolrStream close method NullPointerException
> ---
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






[jira] [Created] (LUCENE-6938) Convert build to work with Git rather than SVN.

2015-12-16 Thread Mark Miller (JIRA)
Mark Miller created LUCENE-6938:
---

 Summary: Convert build to work with Git rather than SVN.
 Key: LUCENE-6938
 URL: https://issues.apache.org/jira/browse/LUCENE-6938
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller









Re: svn commit: r1720085 - /lucene/cms/trunk/content/core/quickstart.mdtext

2015-12-16 Thread Chris Hostetter

I thought there was a macro to refer to the latest stable version number so 
edits like this didn't need to be made after every release?

And isn't there a redirect path for the same purpose?

Something like this i think (looking at other existing pages)...

 
 Lucene 
   {% include "content/latestversion.mdtext" %} Demo



: Date: Tue, 15 Dec 2015 08:25:20 -
: From: ans...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: svn commit: r1720085 -
: /lucene/cms/trunk/content/core/quickstart.mdtext
: 
: Author: anshum
: Date: Tue Dec 15 08:25:20 2015
: New Revision: 1720085
: 
: URL: http://svn.apache.org/viewvc?rev=1720085=rev
: Log:
: Updating to link to 5.4.0 demo
: 
: Modified:
: lucene/cms/trunk/content/core/quickstart.mdtext
: 
: Modified: lucene/cms/trunk/content/core/quickstart.mdtext
: URL: 
http://svn.apache.org/viewvc/lucene/cms/trunk/content/core/quickstart.mdtext?rev=1720085=1720084=1720085=diff
: ==
: --- lucene/cms/trunk/content/core/quickstart.mdtext (original)
: +++ lucene/cms/trunk/content/core/quickstart.mdtext Tue Dec 15 08:25:20 2015
: @@ -5,5 +5,5 @@ in the documentation for that release.
:  
:  The most recent versions can also be found online:
:  
: -- Lucene 
5.3.1 Demo
: +- Lucene 
5.4.0 Demo
:  
: 
: 
: 

-Hoss
http://www.lucidworks.com/




[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15223 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15223/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=3817, 
name=zkCallback-407-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=3815, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[117119464D601892]-SendThread(127.0.0.1:58528),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:230)  
   at 
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1185)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1110)3) 
Thread[id=3816, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[117119464D601892]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
4) Thread[id=4116, name=zkCallback-407-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=4117, 
name=zkCallback-407-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=3817, name=zkCallback-407-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at 

[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream

2015-12-16 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060966#comment-15060966
 ] 

Joel Bernstein commented on SOLR-8190:
--

Gave this a quick review. I like the idea of implementing Closeable, but in this 
case close() would get called twice for each test.

Maybe the better approach is to call close() in the finally block of the 
methods that call open(). 
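The two options under discussion, try-with-resources versus an explicit finally block, can be sketched as follows. The real TupleStream lives in SolrJ; FakeStream here is a simplified stand-in, not the actual API:

```java
import java.io.Closeable;

// Simplified stand-in for a TupleStream-like resource.
class FakeStream implements Closeable {
    boolean opened;
    void open() { opened = true; }
    String read() { return opened ? "tuple" : null; }
    @Override public void close() { opened = false; }

    public static void main(String[] args) {
        // Option 1: try-with-resources, as proposed in SOLR-8190;
        // close() runs automatically when the try block exits.
        try (FakeStream a = new FakeStream()) {
            a.open();
            System.out.println(a.read());
        }

        // Option 2: explicit finally block in the method that calls open(),
        // as suggested above; close() runs even if read() throws.
        FakeStream b = new FakeStream();
        try {
            b.open();
            System.out.println(b.read());
        } finally {
            b.close();
        }
    }
}
```

The double-close concern above arises when a test uses both patterns at once; making close() idempotent, or picking exactly one of the two patterns per call site, avoids it.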

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.






[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061035#comment-15061035
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/16/15 10:38 PM:


They will break (because we plan to remove JARs and binary blobs). They are 
only partially correct anyway (no history past Solr/Lucene merger). You should 
be able to rebase custom patches fairly easily though since the *content* of 
each SVN revision should be identical, only commit hashes will differ.


was (Author: dweiss):
They will break (because we plan to remove JARs and binary blobs). They are 
only partially correct anyway (no history past Solr/Lucene merger).

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






Re: Lucene/Solr git mirror will soon turn off

2015-12-16 Thread Mark Miller
I filed LUCENE-6937 as a parent issue for an SVN->Git migration. I've
linked the issue that Dawid is working on, as well as a new issue for
converting the build to work correctly in a Git checkout rather than SVN.

- Mark

On Tue, Dec 15, 2015 at 1:26 PM Mark Miller  wrote:

> Let's just make some JIRA issues. I'm not worried about volunteers for any
> of it yet, just a direction we agree upon. Once we know where we are going,
> we generally don't have a big volunteer problem. We haven't heard from Uwe
> yet, but it really does seem like moving to Git makes the most sense.
>
> I'm certainly willing to spend some free time on this.
>
> - Mark
>
> On Tue, Dec 15, 2015 at 1:22 PM Dawid Weiss  wrote:
>
>>
>> Oh, just for completeness -- moving to git is not just about the version
>> management, it's also:
>>
>> 1) all the scripts that currently do validations, etc.
>> 2) what to do with svn:* properties
>> 3) what to do with empty folders (not available in git).
>>
>> I don't volunteer to solve these :)
>>
>> Dawid
>>
>>
>> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
>> wrote:
>>
>>>
>>> Ok, give me some time and I'll see what I can achieve. Now that I
>>> actually wrote an SVN dump parser (validator and serializer) things are
>>> under much better control...
>>>
>>> I'll try to achieve the following:
>>>
>>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/,
>>> JARs and perhaps other binaries),
>>> 2) *preserve* history of all core sources. So svn log IndexWriter has to
>>> go back all the way back to when Doug was young and pretty. Ooops, he's
>>> still pretty of course.
>>> 3) provide a way to link git history with svn revisions. I would,
>>> ideally, include a "imported from svn:rev XXX" in the commit log message.
>>> 4) annotate release tags and branches. I don't care much about interim
>>> branches -- they are not important to me (please speak up if you think
>>> otherwise).
>>>
>>> Dawid
>>>
>>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>>
 If Dawid is volunteering to sort out this mess, +1 to let him make it
 a move to git. I don't care if we disagree about JARs, I trust he will
 do a good job and that is more important.

 On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
 wrote:
 >
 > It's not true that nobody is working on this. I have been working on
 the SVN
 > dump in the meantime. You would not believe how incredibly complex the
 > process of processing that (remote) dump is. Let me highlight a few
 key
 > issues:
 >
 > 1) There is no "one" Lucene SVN repository that can be transferred to
 git.
 > The history is a mess. Trunk, branches, tags -- all change paths at
 various
 > points in history. Entire projects are copied from *outside* the
 official
 > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator,
 for
 > example).
 >
 > 2) The history of commits to Lucene's subpath of the SVN is ~50k
 commits.
 > ASF's commit history in which those 50k commits live is 1.8 *million*
 > commits. I think the git-svn sync crashes due to the sheer number of
 (empty)
 > commits in between actual changes.
 >
 > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
 > patch, for example, but there are others (the second larger is
 190megs, the
 > third is 136 megs).
 >
 > 4) The size of JARs is really not an issue. The entire SVN repo I
 mirrored
 > locally (including empty interim commits to cater for svn:mergeinfos)
 is 4G.
 > If you strip the stuff like javadocs and side projects (Nutch, Tika,
 Mahout)
 > then I bet the entire history can fit in 1G total. Of course
 stripping JARs
 > is also doable.
 >
 > 5) There is lots of junk at the main SVN path so you can't just
 version the
 > top-level folder. If you wanted to checkout /asf/lucene then the size
 of the
 > resulting folder is enormous -- I terminated the checkout after I
 reached
 > over 20 gigs. Well, technically you *could* do it, it'd preserve
 perfect
 > history, but I wouldn't want to git co a past version that checks out
 all
 > the tags, branches, etc. This has to be mapped in a sensible way.
 >
 > What I think is that all the above makes (straightforward) conversion
 to git
 > problematic. Especially moving paths are a problem -- how to mark
 tags/
 > branches, where the main line of development is, etc. This conversion
 would
 > have to be guided and hand-tuned to make sense. This effort would
 only pay
 > for itself if we move to git, otherwise I don't see the benefit.
 Paul's
 > script is fine for keeping short-term history.
 >
 > Dawid
 >
 > P.S. Either the SVN repo at Apache is broken or 

[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060912#comment-15060912
 ] 

Michael Sun commented on SOLR-8416:
---

Thanks [~markrmil...@gmail.com] for reviewing.  

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return after that.






[jira] [Updated] (SOLR-8426) Make /export, /stream and /sql handlers implicit

2015-12-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8426:

Attachment: SOLR-8426.patch

Patch which fixes a failure in TestSortingResponseWriter. Its solrconfig was 
actually bad and had the entire  section inside the request handler 
definition!

All tests pass. I'll commit this.

> Make /export, /stream and /sql handlers implicit
> 
>
> Key: SOLR-8426
> URL: https://issues.apache.org/jira/browse/SOLR-8426
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8426.patch, SOLR-8426.patch
>
>
> These handlers should always be present for important features to work and 
> the documentation in solrconfig.xml explicitly asks users not to modify their 
> configuration. Therefore, I think we should enable them implicitly and remove 
> them from solrconfig.xml.






Re: 5.3.2 bug fix release

2015-12-16 Thread Shawn Heisey
On 12/16/2015 2:15 PM, Upayavira wrote:
> Why don't people just upgrade to 5.4? Why do we need another release in
> the 5.3.x range?

I am using a third-party custom Solr plugin.  The latest version of that
plugin (which I have on my dev server) has only been certified to work
with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
cannot use that version yet.  If I happen to need any of the fixes that
are being backported, an official 5.3.2 release would allow me to use
official binaries, which will make my managers much more comfortable
than a version that I compile myself.

Additionally, the IT change policies in place for many businesses
require a huge amount of QA work for software upgrades, but those
policies may be relaxed for hotfixes and upgrades that are *only*
bugfixes.  For users operating under those policies, a bugfix release
will allow them to fix bugs immediately, rather than spend several weeks
validating a new minor release.

There is a huge amount of interest in the new security features in
5.3.x, functionality that has a number of critical problems.  Lots of
users who need those features have already deployed 5.3.1.  Many of the
critical problems are fixed in 5.4, and these are the fixes that Anshum
wants to make available in 5.3.2.  If a user is in either of the
situations that I outlined above, upgrading to 5.4 may be unrealistic.

Thanks,
Shawn





[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061035#comment-15061035
 ] 

Dawid Weiss commented on LUCENE-6933:
-

They will break (because we plan to remove JARs and binary blobs). They are 
only partially correct anyway (no history past Solr/Lucene merger).

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061043#comment-15061043
 ] 

Mark Miller commented on LUCENE-6937:
-

As mentioned in LUCENE-6933, https://issues.apache.org/jira/browse/INFRA-5266 
is a good root issue to explore the INFRA tickets that a previous svn->git 
migration had to file.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (SOLR-7885) Add support for loading HTTP resources

2015-12-16 Thread Aaron LaBella (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060905#comment-15060905
 ] 

Aaron LaBella commented on SOLR-7885:
-

BTW, trying to call command=reload-config=... on the DIH doesn't work -- at 
best it just updates the in-memory config; it doesn't persist the config to the 
local file system, which was half the point of the patch.

> Add support for loading HTTP resources
> --
>
> Key: SOLR-7885
> URL: https://issues.apache.org/jira/browse/SOLR-7885
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler, SolrJ
>Affects Versions: 5.3
>Reporter: Aaron LaBella
> Attachments: SOLR-7885-1.patch, SOLR-7885-2.patch
>
>
> I have a need to be able to load data import handler configuration files from 
> an HTTP server instead of the local file system.  So, I modified 
> {code}org.apache.solr.core.SolrResourceLoader{code} and some of the 
> respective dataimport files in {code}org.apache.solr.handler.dataimport{code} 
> to be able to support doing this.  
> {code}solrconfig.xml{code} now has the option to define a parameter: 
> *configRemote*, and if defined (and it's an HTTP(s) URL), it'll attempt to 
> load the resource.  If successful, it'll also persist the resource to the 
> local file system so that it is available on a Solr server restart in case 
> the remote resource is currently unavailable.
> Lastly, to be consistent with the pattern that already exists in 
> SolrResourceLoader, this feature is *disabled* by default, and requires the 
> setting of an additional JVM property: 
> {code}-Dsolr.allow.http.resourceloading=true{code}.
> Please review and let me know if there is anything else that needs to be done 
> in order for this patch to make the next release.  As far as I can tell, it's 
> fully tested and ready to go.
> Thanks.






[jira] [Created] (SOLR-8430) ReplicationHandler throttling should be applied across all concurrent replication requests

2015-12-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8430:
---

 Summary: ReplicationHandler throttling should be applied across 
all concurrent replication requests
 Key: SOLR-8430
 URL: https://issues.apache.org/jira/browse/SOLR-8430
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 5.5, Trunk


The ability to throttle replication was added in SOLR-6485 but the throttle is 
applied only per-request so if N replicas go into recovery together the actual 
outgoing network usage would be N * maxWriteMBPerSec.

Ideally there should be a way to apply such limits per-node instead of only 
per-leader but I digress.
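The aggregate-bandwidth gap described above can be quantified with a toy calculation. Only the name {{maxWriteMBPerSec}} comes from SOLR-6485; the class and method names below are invented for illustration:

```java
// Toy model of the throttling gap: per-request vs. shared budgets.
// Only maxWriteMBPerSec is taken from the issue; everything else is assumed.
public class ThrottleMath {

    // Current behavior: the throttle is applied per replication request,
    // so n concurrent recoveries each get their own budget.
    static double perRequestAggregateMBPerSec(int n, double maxWriteMBPerSec) {
        return n * maxWriteMBPerSec;
    }

    // Proposed behavior: one budget shared across all concurrent requests,
    // so the aggregate never exceeds the configured limit.
    static double sharedAggregateMBPerSec(int n, double maxWriteMBPerSec) {
        return maxWriteMBPerSec;
    }

    public static void main(String[] args) {
        // 4 replicas recovering at once against a 5 MB/s limit:
        System.out.println(perRequestAggregateMBPerSec(4, 5.0)); // 20.0 on the wire
        System.out.println(sharedAggregateMBPerSec(4, 5.0));     // 5.0 on the wire
    }
}
```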






Re: [jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss
We can't touch Apache's SVN so it definitely stays! :)

This was also something that crossed my mind -- we effectively have
multiple separate projects in one git repo. While it's something SVN can
take, it's a more problematic issue with git. I don't know if one Apache
project can have multiple git repos, but I'd assume it'd be a natural way
out for pylucene? I can also include it in the git repo, of course, but I'd
opt for having a separate branch for it (one that is orphaned from actual
Lucene code).

Dawid

On Wed, Dec 16, 2015 at 10:33 PM, Andi Vajda  wrote:

>
> On Wed, 16 Dec 2015, Dawid Weiss (JIRA) wrote:
>
>
>> [
>> https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>> ]
>>
>> Dawid Weiss updated LUCENE-6933:
>> 
>>Description:
>> Goals:
>> * selectively drop projects and core-irrelevant stuff:
>>  ** {{lucene/site}}
>>  ** {{lucene/nutch}}
>>  ** {{lucene/lucy}}
>>  ** {{lucene/tika}}
>>  ** {{lucene/hadoop}}
>>  ** {{lucene/mahout}}
>>  ** {{lucene/pylucene}}
>>
>
> Does dropping lucene/pylucene mean it stays back in SVN (fine) or it
> disappears (err, let's talk) ?
>
> Andi..
>
>
>  ** {{lucene/lucene.net}}
>>  ** {{lucene/old_versioned_docs}}
>>  ** {{lucene/openrelevance}}
>>  ** {{lucene/board-reports}}
>>  ** {{lucene/java/site}}
>>  ** {{lucene/java/nightly}}
>>  ** {{lucene/dev/nightly}}
>>  ** {{lucene/dev/lucene2878}}
>>  ** {{lucene/sandbox/luke}}
>>  ** {{lucene/solr/nightly}}
>> * preserve the history of all changes to core sources (Solr and Lucene).
>>  ** {{lucene/java}}
>>  ** {{lucene/solr}}
>>  ** {{lucene/dev/trunk}}
>>  ** {{lucene/dev/branches/branch_3x}}
>>  ** {{lucene/dev/branches/branch_4x}}
>>  ** {{lucene/dev/branches/branch_5x}}
>> * provide a way to link git commits and history with svn revisions (amend
>> the log message).
>> * annotate release tags
>> * deal with large binary blobs (JARs): keep empty files instead for their
>> historical reference only.
>>
>> Non goals:
>> * no need to preserve "exact" merge history from SVN (see "impossible"
>> below).
>> * Ability to build ancient versions is not an issue.
>>
>> Impossible:
>> * It is not possible to preserve SVN "merge history" because of the
>> following reasons:
>>  ** Each commit in SVN operates on individual files. So one commit can
>> "copy" (and record a merge) files from anywhere in the object tree, even
>> modifying them along the way. There simply is no equivalent for this in git.
>>  ** There are historical commits in SVN that apply changes to multiple
>> branches in one commit ({{r1569975}}).
>> * Because exact merge tracking is impossible then what follows is that
>> exact "linearized" history of a given file is also impossible to record.
>> Let's say changes X, Y and Z have been applied to a branch of a file A and
>> then merged back. In git, this would be reflected as a single commit
>> flattening X, Y and Z (on the target branch) and three independent commits
>> on the branch. The "copy-from" link from one branch to another cannot be
>> represented because, as mentioned, merges are done on entire branches in
>> git, not on individual files. Yes, there are commits in SVN history that
>> have selective file merges (not entire branches).
>>
>>
>>  was:
>> Goals:
>> - selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
>> and perhaps other binaries),
>> - *preserve* history of all core sources. So svn log IndexWriter has to
>> go back all the way back to when Doug was young and pretty. Ooops, he's
>> still pretty of course.
>> - provide a way to link git history with svn revisions. I would, ideally,
>> include a "imported from svn:rev XXX" in the commit log message.
>> - annotate release tags and branches. I don't care much about interim
>> branches -- they are not important to me (please speak up if you think
>> otherwise).
>>
>> Non goals
>> - no need to preserve "exact" history from SVN (the project may skip
>> JARs, etc.). Ability to build ancient versions is not an issue.
>>
>>
>> Create a (cleaned up) SVN history in git
>>> 
>>>
>>> Key: LUCENE-6933
>>> URL: https://issues.apache.org/jira/browse/LUCENE-6933
>>> Project: Lucene - Core
>>>  Issue Type: Task
>>>Reporter: Dawid Weiss
>>>Assignee: Dawid Weiss
>>>
>>> Goals:
>>> * selectively drop projects and core-irrelevant stuff:
>>>   ** {{lucene/site}}
>>>   ** {{lucene/nutch}}
>>>   ** {{lucene/lucy}}
>>>   ** {{lucene/tika}}
>>>   ** {{lucene/hadoop}}
>>>   ** {{lucene/mahout}}
>>>   ** {{lucene/pylucene}}
>>>   ** {{lucene/lucene.net}}
>>>   ** {{lucene/old_versioned_docs}}
>>>   ** {{lucene/openrelevance}}
>>>   ** {{lucene/board-reports}}
>>>   ** {{lucene/java/site}}
>>>   ** {{lucene/java/nightly}}
>>>   ** {{lucene/dev/nightly}}
>>>   ** {{lucene/dev/lucene2878}}
>>>   ** 

Re: [jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss
> I personally don't care. Git has been a non-issue in PyLucene.
> It can move with Lucene or stay in SVN, either way is fine by me.


I don't see a problem with it staying in SVN, we'd just clean up dev, much
like it has been done before when Solr and Lucene were merged (in fact,
this is what you see in the git mirror, all the earlier history is
discarded).

Dawid


Re: [jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Upayavira
One project can have multiple git repos. Apparently there's one or
more with 100+ repos, so all is good there if pylucene wants to shift
to git also.

Upayavira


On Wed, Dec 16, 2015, at 09:48 PM, Dawid Weiss wrote:
>
>> I personally don't care. Git has been a non-issue in PyLucene.
>> It can move with Lucene or stay in SVN, either way is fine by me.
>
> I don't see a problem with it staying in SVN, we'd just clean up dev,
> much like it has been done before when Solr and Lucene were merged (in
> fact, this is what you see in the git mirror, all the earlier history
> is discarded).
>
> Dawid
>


[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061015#comment-15061015
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/16/15 10:27 PM:


After some more digging and experiments it seems realistic that the following 
multi-step process will get us the goals above.
* cat /dev/null on all jar files (and possibly other binaries) directly on the 
SVN dump, or use https://rtyley.github.io/bfg-repo-cleaner/ to remove/truncate 
them on the git repo
* create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* use {{git-svn}} to mirror (separately) {{lucene/java/*}}, {{lucene/dev/*}} 
and Solr's pre-merge history.
* import those separate history trees into one git repo, use grafts and branch 
filtering to stitch them together.
* do any finalizing cleanups (correct commit author addresses, clean up any 
junk branches, tags, add actual release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.
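The "grafts ... to stitch them together" step can be tried out in isolation. A minimal, self-contained sketch (all paths and commit messages invented) that joins two unrelated histories into one lineage with {{git replace --graft}}:

```shell
#!/bin/sh
# Demo of grafting one history underneath another, as in the stitching step.
set -e
dir=$(mktemp -d)

# A repo holding the "pre-merge" history.
git init -q "$dir/old"
git -C "$dir/old" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "pre-merge history"

# A repo holding the "post-merge" history, with an unrelated root commit.
git init -q "$dir/new"
git -C "$dir/new" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "post-merge root"
git -C "$dir/new" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "latest work"

# Fetch the old history and graft it underneath the post-merge root commit.
git -C "$dir/new" fetch -q "$dir/old"
old_tip=$(git -C "$dir/new" rev-parse FETCH_HEAD)
root=$(git -C "$dir/new" rev-list --max-parents=0 HEAD)
git -C "$dir/new" replace --graft "$root" "$old_tip"

# git log now walks both histories as a single lineage:
# latest work -> post-merge root -> pre-merge history
git -C "$dir/new" log --format=%s
```

{{git replace --graft}} leaves the object database untouched; a later {{git filter-branch}} (or filter-repo) pass can bake the graft into real parent links before publishing.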



was (Author: dweiss):
After some more digging and experiments it seems realistic that the following 
multi-step process will get us the goals above.
* cat /dev/null on all jar files (and possibly other binaries) directly on the 
SVN dump,
* create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* use {{git-svn}} to mirror (separately) {{lucene/java/*}}, {{lucene/dev/*}} 
and Solr's pre-merge history.
* import those separate history trees into one git repo, use grafts and branch 
filtering to stitch them together.
* do any finalizing cleanups (correct commit author addresses, clean up any 
junk branches, tags, add actual release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).





[jira] [Commented] (SOLR-7885) Add support for loading HTTP resources

2015-12-16 Thread Aaron LaBella (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060901#comment-15060901
 ] 

Aaron LaBella commented on SOLR-7885:
-

I'm closing this issue as I literally just took my patch and moved it into a 
custom RequestHandler instead.
Thanks.

> Add support for loading HTTP resources
> --
>
> Key: SOLR-7885
> URL: https://issues.apache.org/jira/browse/SOLR-7885
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler, SolrJ
>Affects Versions: 5.3
>Reporter: Aaron LaBella
> Attachments: SOLR-7885-1.patch, SOLR-7885-2.patch
>
>
> I have a need to be able to load data import handler configuration files from 
> an HTTP server instead of the local file system.  So, I modified 
> {code}org.apache.solr.core.SolrResourceLoader{code} and some of the 
> respective dataimport files in {code}org.apache.solr.handler.dataimport{code} 
> to be able to support doing this.  
> {code}solrconfig.xml{code} now has the option to define a parameter: 
> *configRemote*, and if defined (and it's an HTTP(s) URL), it'll attempt to 
> load the resource.  If successful, it'll also persist the resource to the 
> local file system so that it is available on a Solr server restart in case 
> the remote resource is currently unavailable.
> Lastly, to be consistent with the pattern that already exists in 
> SolrResourceLoader, this feature is *disabled* by default, and requires the 
> setting of an additional JVM property: 
> {code}-Dsolr.allow.http.resourceloading=true{code}.
> Please review and let me know if there is anything else that needs to be done 
> in order for this patch to make the next release.  As far as I can tell, it's 
> fully tested and ready to go.
> Thanks.






[jira] [Closed] (SOLR-7885) Add support for loading HTTP resources

2015-12-16 Thread Aaron LaBella (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron LaBella closed SOLR-7885.
---
Resolution: Not A Problem

> Add support for loading HTTP resources
> --
>
> Key: SOLR-7885
> URL: https://issues.apache.org/jira/browse/SOLR-7885
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler, SolrJ
>Affects Versions: 5.3
>Reporter: Aaron LaBella
> Attachments: SOLR-7885-1.patch, SOLR-7885-2.patch
>
>
> I have a need to be able to load data import handler configuration files from 
> an HTTP server instead of the local file system.  So, I modified 
> {code}org.apache.solr.core.SolrResourceLoader{code} and some of the 
> respective dataimport files in {code}org.apache.solr.handler.dataimport{code} 
> to be able to support doing this.  
> {code}solrconfig.xml{code} now has the option to define a parameter: 
> *configRemote*, and if defined (and it's an HTTP(s) URL), it'll attempt to 
> load the resource.  If successful, it'll also persist the resource to the 
> local file system so that it is available on a Solr server restart in case 
> the remote resource is currently unavailable.
> Lastly, to be consistent with the pattern that already exists in 
> SolrResourceLoader, this feature is *disabled* by default, and requires the 
> setting of an additional JVM property: 
> {code}-Dsolr.allow.http.resourceloading=true{code}.
> Please review and let me know if there is anything else that needs to be done 
> in order for this patch to make the next release.  As far as I can tell, it's 
> fully tested and ready to go.
> Thanks.






[jira] [Commented] (SOLR-8191) Guard against CloudSolrStream close method NullPointerException

2015-12-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060943#comment-15060943
 ] 

ASF subversion and git services commented on SOLR-8191:
---

Commit 1720461 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720461 ]

SOLR-8191: Update CHANGES.txt

> Guard against CloudSolrStream close method NullPointerException
> ---
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.
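The guard the issue asks for is a null check per field before closing. A minimal sketch (field names mirror CloudSolrStream, but the types here are simplified stand-ins, not SolrJ's):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

// Sketch of the guard SOLR-8191 describes: close() must tolerate a stream
// that was constructed but never opened, so either field may still be null.
public class GuardedStream implements Closeable {
    Closeable cloudSolrClient;    // set in open(), so null until then
    List<Closeable> solrStreams;  // likewise populated in open()

    @Override
    public void close() throws IOException {
        if (solrStreams != null) {
            for (Closeable s : solrStreams) {
                s.close();
            }
        }
        if (cloudSolrClient != null) {
            cloudSolrClient.close();
        }
    }

    // Returns true when close() on an unopened stream does not throw.
    static boolean closesCleanlyWhenUnopened() {
        try {
            new GuardedStream().close();
            return true;
        } catch (RuntimeException | IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(closesCleanlyWhenUnopened());
    }
}
```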






[jira] [Commented] (SOLR-8426) Make /export, /stream and /sql handlers implicit

2015-12-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060986#comment-15060986
 ] 

ASF subversion and git services commented on SOLR-8426:
---

Commit 1720468 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720468 ]

SOLR-8426: Enable /export, /stream and /sql handlers by default and remove them 
from example configs

> Make /export, /stream and /sql handlers implicit
> 
>
> Key: SOLR-8426
> URL: https://issues.apache.org/jira/browse/SOLR-8426
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8426.patch, SOLR-8426.patch
>
>
> These handlers should always be present for important features to work and 
> the documentation in solrconfig.xml explicitly asks users not to modify their 
> configuration. Therefore, I think we should enable them implicitly and remove 
> from solrconfig.xml






[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream

2015-12-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061076#comment-15061076
 ] 

Kevin Risden commented on SOLR-8190:


The getTuples method was added recently and is the only place that open/close 
is called. That is where the try-with-resources should go instead of wrapping 
each test. I'll update the patch with these changes; that should make the patch 
a lot simpler. While investigating this I found two new NPE issues with 
FacetStream and StatsStream, with the same root cause as in SOLR-8191. 

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.
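What Closeable buys in tests can be shown with a stand-in stream (SimpleTupleStream below is invented; only the try-with-resources pattern is the point -- close() runs on normal exit and on a thrown assertion alike):

```java
import java.io.Closeable;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Invented stand-in for SolrJ's TupleStream; it just iterates strings.
class SimpleTupleStream implements Closeable {
    private final Iterator<String> it;
    boolean closed = false;

    SimpleTupleStream(List<String> tuples) { this.it = tuples.iterator(); }

    // null marks end-of-stream, loosely mirroring an EOF tuple.
    String read() { return it.hasNext() ? it.next() : null; }

    @Override
    public void close() { closed = true; }
}

public class TryWithResourcesDemo {
    static int countTuples(List<String> tuples) {
        // try-with-resources: the stream is closed even if an assertion
        // or exception fires inside the block.
        try (SimpleTupleStream stream = new SimpleTupleStream(tuples)) {
            int n = 0;
            while (stream.read() != null) {
                n++;
            }
            return n;
        }
    }

    public static void main(String[] args) {
        System.out.println(countTuples(Arrays.asList("a", "b", "c")));
    }
}
```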






[jira] [Updated] (SOLR-8426) Make /export, /stream and /sql handlers implicit

2015-12-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8426:

Attachment: SOLR-8426.patch

> Make /export, /stream and /sql handlers implicit
> 
>
> Key: SOLR-8426
> URL: https://issues.apache.org/jira/browse/SOLR-8426
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8426.patch
>
>
> These handlers should always be present for important features to work and 
> the documentation in solrconfig.xml explicitly asks users not to modify their 
> configuration. Therefore, I think we should enable them implicitly and remove 
> from solrconfig.xml






Re: [jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Andi Vajda


On Wed, 16 Dec 2015, Dawid Weiss (JIRA) wrote:



[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6933:

   Description:
Goals:
* selectively drop projects and core-irrelevant stuff:
 ** {{lucene/site}}
 ** {{lucene/nutch}}
 ** {{lucene/lucy}}
 ** {{lucene/tika}}
 ** {{lucene/hadoop}}
 ** {{lucene/mahout}}
 ** {{lucene/pylucene}}


Does dropping lucene/pylucene mean it stays back in SVN (fine) or 
it disappears (err, let's talk) ?


Andi..


 ** {{lucene/lucene.net}}
 ** {{lucene/old_versioned_docs}}
 ** {{lucene/openrelevance}}
 ** {{lucene/board-reports}}
 ** {{lucene/java/site}}
 ** {{lucene/java/nightly}}
 ** {{lucene/dev/nightly}}
 ** {{lucene/dev/lucene2878}}
 ** {{lucene/sandbox/luke}}
 ** {{lucene/solr/nightly}}
* preserve the history of all changes to core sources (Solr and Lucene).
 ** {{lucene/java}}
 ** {{lucene/solr}}
 ** {{lucene/dev/trunk}}
 ** {{lucene/dev/branches/branch_3x}}
 ** {{lucene/dev/branches/branch_4x}}
 ** {{lucene/dev/branches/branch_5x}}
* provide a way to link git commits and history with svn revisions (amend the 
log message).
* annotate release tags
* deal with large binary blobs (JARs): keep empty files instead for their 
historical reference only.

Non goals:
* no need to preserve "exact" merge history from SVN (see "impossible" below).
* Ability to build ancient versions is not an issue.

Impossible:
* It is not possible to preserve SVN "merge history" because of the following 
reasons:
 ** Each commit in SVN operates on individual files. So one commit can "copy" 
(and record a merge) files from anywhere in the object tree, even modifying them along 
the way. There simply is no equivalent for this in git.
 ** There are historical commits in SVN that apply changes to multiple branches 
in one commit ({{r1569975}}).
* Because exact merge tracking is impossible then what follows is that exact "linearized" 
history of a given file is also impossible to record. Let's say changes X, Y and Z have been 
applied to a branch of a file A and then merged back. In git, this would be reflected as a single 
commit flattening X, Y and Z (on the target branch) and three independent commits on the branch. 
The "copy-from" link from one branch to another cannot be represented because, as 
mentioned, merges are done on entire branches in git, not on individual files. Yes, there are 
commits in SVN history that have selective file merges (not entire branches).


 was:
Goals:
- selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
perhaps other binaries),
- *preserve* history of all core sources. So svn log IndexWriter has to go back 
all the way back to when Doug was young and pretty. Ooops, he's still pretty of 
course.
- provide a way to link git history with svn revisions. I would, ideally, include a 
"imported from svn:rev XXX" in the commit log message.
- annotate release tags and branches. I don't care much about interim branches 
-- they are not important to me (please speak up if you think otherwise).

Non goals
- no need to preserve "exact" history from SVN (the project may skip JARs, 
etc.). Ability to build ancient versions is not an issue.



Create a (cleaned up) SVN history in git


Key: LUCENE-6933
URL: https://issues.apache.org/jira/browse/LUCENE-6933
Project: Lucene - Core
 Issue Type: Task
   Reporter: Dawid Weiss
   Assignee: Dawid Weiss

Goals:
* selectively drop projects and core-irrelevant stuff:
  ** {{lucene/site}}
  ** {{lucene/nutch}}
  ** {{lucene/lucy}}
  ** {{lucene/tika}}
  ** {{lucene/hadoop}}
  ** {{lucene/mahout}}
  ** {{lucene/pylucene}}
  ** {{lucene/lucene.net}}
  ** {{lucene/old_versioned_docs}}
  ** {{lucene/openrelevance}}
  ** {{lucene/board-reports}}
  ** {{lucene/java/site}}
  ** {{lucene/java/nightly}}
  ** {{lucene/dev/nightly}}
  ** {{lucene/dev/lucene2878}}
  ** {{lucene/sandbox/luke}}
  ** {{lucene/solr/nightly}}
* preserve the history of all changes to core sources (Solr and Lucene).
  ** {{lucene/java}}
  ** {{lucene/solr}}
  ** {{lucene/dev/trunk}}
  ** {{lucene/dev/branches/branch_3x}}
  ** {{lucene/dev/branches/branch_4x}}
  ** {{lucene/dev/branches/branch_5x}}
* provide a way to link git commits and history with svn revisions (amend the 
log message).
* annotate release tags
* deal with large binary blobs (JARs): keep empty files instead for their 
historical reference only.
Non goals:
* no need to preserve "exact" merge history from SVN (see "impossible" below).
* Ability to build ancient versions is not an issue.
Impossible:
* It is not possible to preserve SVN "merge history" because of the following 
reasons:
  ** Each commit in SVN operates on individual files. So one commit can "copy" 
(and record a merge) files from anywhere in the object tree, even modifying them 

[jira] [Updated] (SOLR-8191) Guard against CloudSolrStream close method NullPointerException

2015-12-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8191:
-
Summary: Guard against CloudSolrStream close method NullPointerException  
(was: CloudSolrStream close method NullPointerException)

> Guard against CloudSolrStream close method NullPointerException
> ---
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






Re: [jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Andi Vajda


On Wed, 16 Dec 2015, Dawid Weiss wrote:


We can't touch Apache's SVN so it definitely stays! :)

This was also something that crossed my mind -- we effectively have
multiple separate projects in one git repo. While it's something SVN can
take, it's a more problematic issue with git. I don't know if one Apache
project can have multiple git repos, but I'd assume it'd be a natural way
out for pylucene? I can also include it in the git repo, of course, but I'd
opt for having a separate branch for it (one that is orphaned from actual
Lucene code).


I personally don't care. Git has been a non-issue in PyLucene.
It can move with Lucene or stay in SVN, either way is fine by me.

Andi..



Dawid

On Wed, Dec 16, 2015 at 10:33 PM, Andi Vajda  wrote:



On Wed, 16 Dec 2015, Dawid Weiss (JIRA) wrote:



[
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Dawid Weiss updated LUCENE-6933:

   Description:
Goals:
* selectively drop projects and core-irrelevant stuff:
 ** {{lucene/site}}
 ** {{lucene/nutch}}
 ** {{lucene/lucy}}
 ** {{lucene/tika}}
 ** {{lucene/hadoop}}
 ** {{lucene/mahout}}
 ** {{lucene/pylucene}}



Does dropping lucene/pylucene mean it stays back in SVN (fine) or it
disappears (err, let's talk) ?

Andi..


 ** {{lucene/lucene.net}}

 ** {{lucene/old_versioned_docs}}
 ** {{lucene/openrelevance}}
 ** {{lucene/board-reports}}
 ** {{lucene/java/site}}
 ** {{lucene/java/nightly}}
 ** {{lucene/dev/nightly}}
 ** {{lucene/dev/lucene2878}}
 ** {{lucene/sandbox/luke}}
 ** {{lucene/solr/nightly}}
* preserve the history of all changes to core sources (Solr and Lucene).
 ** {{lucene/java}}
 ** {{lucene/solr}}
 ** {{lucene/dev/trunk}}
 ** {{lucene/dev/branches/branch_3x}}
 ** {{lucene/dev/branches/branch_4x}}
 ** {{lucene/dev/branches/branch_5x}}
* provide a way to link git commits and history with svn revisions (amend
the log message).
* annotate release tags
* deal with large binary blobs (JARs): keep empty files instead for their
historical reference only.
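The "link git commits and history with svn revisions" goal above can be sketched with stock git: amend each imported commit message with a trailer, then look it up with {{git log --grep}}. A minimal self-contained demo follows; the repo, issue number, revision number, and trailer format are all made-up illustrations, not the actual migration tooling.

```shell
#!/bin/sh
# Demo: a commit message carrying an "svn:rev" trailer can be found later
# with git log --grep.  Everything here is a throwaway example.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo change > file.txt
git add file.txt
git commit -qm 'LUCENE-0000: example change

(imported from svn:rev 1234567)'
# Find the git commit that corresponds to svn revision 1234567:
git log --grep='svn:rev 1234567' --format=%s | head -1
```

Because {{--grep}} matches anywhere in the commit message (subject or body), the trailer can live at the end of the message without disturbing the subject line.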

Non goals:
* no need to preserve "exact" merge history from SVN (see "impossible"
below).
* Ability to build ancient versions is not an issue.

Impossible:
* It is not possible to preserve SVN "merge history" because of the
following reasons:
 ** Each commit in SVN operates on individual files. So one commit can
"copy" (and record a merge) files from anywhere in the object tree, even
modifying them along the way. There simply is no equivalent for this in git.
 ** There are historical commits in SVN that apply changes to multiple
branches in one commit ({{r1569975}}).
* Because exact merge tracking is impossible then what follows is that
exact "linearized" history of a given file is also impossible to record.
Let's say changes X, Y and Z have been applied to a branch of a file A and
then merged back. In git, this would be reflected as a single commit
flattening X, Y and Z (on the target branch) and three independent commits
on the branch. The "copy-from" link from one branch to another cannot be
represented because, as mentioned, merges are done on entire branches in
git, not on individual files. Yes, there are commits in SVN history that
have selective file merges (not entire branches).


 was:
Goals:
- selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
and perhaps other binaries),
- *preserve* history of all core sources. So svn log IndexWriter has to
go back all the way back to when Doug was young and pretty. Ooops, he's
still pretty of course.
- provide a way to link git history with svn revisions. I would, ideally,
include a "imported from svn:rev XXX" in the commit log message.
- annotate release tags and branches. I don't care much about interim
branches -- they are not important to me (please speak up if you think
otherwise).

Non goals
- no need to preserve "exact" history from SVN (the project may skip
JARs, etc.). Ability to build ancient versions is not an issue.


Create a (cleaned up) SVN history in git



Key: LUCENE-6933
URL: https://issues.apache.org/jira/browse/LUCENE-6933
Project: Lucene - Core
 Issue Type: Task
   Reporter: Dawid Weiss
   Assignee: Dawid Weiss

Goals:
* selectively drop projects and core-irrelevant stuff:
  ** {{lucene/site}}
  ** {{lucene/nutch}}
  ** {{lucene/lucy}}
  ** {{lucene/tika}}
  ** {{lucene/hadoop}}
  ** {{lucene/mahout}}
  ** {{lucene/pylucene}}
  ** {{lucene/lucene.net}}
  ** {{lucene/old_versioned_docs}}
  ** {{lucene/openrelevance}}
  ** {{lucene/board-reports}}
  ** {{lucene/java/site}}
  ** {{lucene/java/nightly}}
  ** {{lucene/dev/nightly}}
  ** {{lucene/dev/lucene2878}}
  ** {{lucene/sandbox/luke}}
  ** {{lucene/solr/nightly}}
* preserve the history of all changes to core sources (Solr and Lucene).
  ** {{lucene/java}}
  ** {{lucene/solr}}
  ** 

[jira] [Commented] (SOLR-7885) Add support for loading HTTP resources

2015-12-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060946#comment-15060946
 ] 

Jan Høydahl commented on SOLR-7885:
---

With SolrCloud, you could use ZooKeeper as an integration point and let the 
external system push updated config to ZK and then call the {{reload-config}} 
API, or even reload the whole collection.

Another future solution could be adding a REST API for editing DIH-configs, 
along the lines of what we already have for schema, solrconfig and 
security.json. Cannot find any open JIRAs for such a feature though.
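That ZK-push-and-reload flow can be sketched with the stock tooling shipped with Solr; the collection name, configset name, file paths, and ZooKeeper address below are placeholders, not values from this issue, and the commands assume a running SolrCloud cluster:

```
# Push the updated DIH config into the collection's configset in ZooKeeper
# (names and paths are placeholders):
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 \
  -cmd putfile /configs/myconf/dih-config.xml /tmp/dih-config.xml

# Then reload the whole collection so every core picks up the new config:
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'
```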

> Add support for loading HTTP resources
> --
>
> Key: SOLR-7885
> URL: https://issues.apache.org/jira/browse/SOLR-7885
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler, SolrJ
>Affects Versions: 5.3
>Reporter: Aaron LaBella
> Attachments: SOLR-7885-1.patch, SOLR-7885-2.patch
>
>
> I have a need to be able to load data import handler configuration files from 
> an HTTP server instead of the local file system.  So, I modified 
> {code}org.apache.solr.core.SolrResourceLoader{code} and some of the 
> respective dataimport files in {code}org.apache.solr.handler.dataimport{code} 
> to be able to support doing this.  
> {code}solrconfig.xml{code} now has the option to define a parameter: 
> *configRemote*, and if defined (and it's an HTTP(s) URL), it'll attempt to 
> load the resource.  If successful, it'll also persist the resource to the 
> local file system so that it is available on a Solr server restart in case 
> the remote resource is currently unavailable.
> Lastly, to be consistent with the pattern that already exists in 
> SolrResourceLoader, this feature is *disabled* by default, and requires the 
> setting of an additional JVM property: 
> {code}-Dsolr.allow.http.resourceloading=true{code}.
> Please review and let me know if there is anything else that needs to be done 
> in order for this patch to make the next release.  As far as I can tell, it's 
> fully tested and ready to go.
> Thanks.






[jira] [Resolved] (SOLR-8426) Make /export, /stream and /sql handlers implicit

2015-12-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-8426.
-
Resolution: Fixed
  Assignee: Shalin Shekhar Mangar

> Make /export, /stream and /sql handlers implicit
> 
>
> Key: SOLR-8426
> URL: https://issues.apache.org/jira/browse/SOLR-8426
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8426.patch, SOLR-8426.patch
>
>
> These handlers should always be present for important features to work and 
> the documentation in solrconfig.xml explicitly asks users not to modify their 
> configuration. Therefore, I think we should enable them implicitly and remove 
> them from solrconfig.xml






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-16 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061027#comment-15061027
 ] 

Upayavira commented on LUCENE-6933:
---

[~dweiss] Just for clarity's sake - what impact will this have on existing 
clones/forks on Github? Would they continue to work, or break?

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Created] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-16 Thread Mark Miller (JIRA)
Mark Miller created LUCENE-6937:
---

 Summary: Migrate Lucene project from SVN to Git.
 Key: LUCENE-6937
 URL: https://issues.apache.org/jira/browse/LUCENE-6937
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller


See mailing list discussion: 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Updated] (SOLR-445) Update Handlers abort with bad documents

2015-12-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-445:
--
Attachment: SOLR-445.patch


I started playing around with this patch a bit to see if I could help move it 
forward.  I'm a little out of my depth with a lot of the details of how 
distributed updates work, but the more I tried to make sense of it, the more 
convinced I was that there were a lot of things that just weren't very well 
accounted for in the existing tests (which were consistently failing, but the 
failures themselves weren't consistent between runs).

Here's a summary of what's new/different in the patch I'm attaching...


* DistributedUpdateProcessor.DistribPhase
** not sure why this enum was made non-static in earlier patches ... I reverted 
this unneeded change.
* TolerantUpdateProcessor
** processDelete
*** Method has a couple of glaringly obvious bugs that apparently don't trip 
under the current tests
*** added several nocommits of things that jumped out at me
* DistribTolerantUpdateProcessorTest
** beefed up assertion msgs in assertUSucceedsWithErrors
** fixed testValidAdds so it's not dead code
** testInvalidAdds
*** sanity check code wasn't passing reliably
**** details of what failed are lost depending on how the update is routed 
(random seed)
**** relaxed this check to be reliable, with a nocommit comment to see if we 
can tighten it up
*** assuming the sanity check passes, assertUSucceedsWithErrors (still) fails 
on some seeds w/null error list
**** I'm guessing this is what anshum alluded to in the last comment: "Node2 as 
of now return an HTTP OK and doesn't throw an exception, the StreamingSolrClient 
used but the Distributed Updated Processor doesn't realize the error that was 
consumed by the leader of shard 1"
* TestTolerantUpdateProcessorCloud
** New MiniSolrCloudCluster based test to try and demonstrate all the possible 
distrib code paths I could think of (see below)

TestTolerantUpdateProcessorCloud is the real meat of what I've added here.  
Starting with the basic behavior/assertions currently tested in 
TolerantUpdateProcessorTest, I built it up to try and exercise every possible 
distributed update code path I could imagine (updates with docs all on one shard 
some of which fail, updates with docs for diff shards and some from each shard 
fail, updates with docs for diff shards but only one shard fails, etc...) -- 
but only tested against a MiniSolrCloud collection that actually had 1 node, 1 
shard, 1 replica and an HttpSolrClient talking directly to that node.  Once all 
those assertions were passing, I changed it to use 5 nodes, 2 shards, 2 
replicas and started testing all of those scenarios against 5 HttpSolrClients 
pointed at every individual node (one of which hosts no replicas) as well as a 
ZK-aware CloudSolrClient.  All 6 tests against all 6 clients currently fail 
(reliably) at some point in these scenarios.



Independent of all the things I still need to make sense of in the existing 
code to try and help get these tests passing, I still have one big question 
about what the desired/expected behavior should be for clients when maxErrors is 
exceeded -- at the moment, in single node setups, the client gets a 400 error 
with the top level "error" section corresponding to whatever error caused it 
to exceed maxErrors, but the responseHeader is still populated with the 
individual errors and the appropriate numAdds & numErrors, for example...

{code}
$ curl -v -X POST 
'http://localhost:8983/solr/techproducts/update?indent=true=true=tolerant'
 -H 'Content-Type: application/json' --data-binary 
'[{"id":"hoss1","foo_i":42},{"id":"bogus1","foo_i":"bogus"},{"id":"hoss2","foo_i":66},{"id":"bogus2","foo_i":"bogus"},{"id":"bogus3","foo_i":"bogus"},{"id":"hoss3","foo_i":42}]'
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8983 (#0)
> POST /solr/techproducts/update?indent=true=true=tolerant 
> HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost:8983
> Accept: */*
> Content-Type: application/json
> Content-Length: 175
> 
* upload completely sent off: 175 out of 175 bytes
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain;charset=utf-8
< Transfer-Encoding: chunked
< 
{
  "responseHeader":{
"numErrors":3,
"errors":{
  "bogus1":{
"message":"ERROR: [doc=bogus1] Error adding field 'foo_i'='bogus' 
msg=For input string: \"bogus\""},
  "bogus2":{
"message":"ERROR: [doc=bogus2] Error adding field 'foo_i'='bogus' 
msg=For input string: \"bogus\""},
  "bogus3":{
"message":"ERROR: [doc=bogus3] Error adding field 'foo_i'='bogus' 
msg=For input string: \"bogus\""}},
"numAdds":2,
"status":400,
"QTime":4},
  "error":{
"msg":"ERROR: [doc=bogus3] Error adding field 'foo_i'='bogus' msg=For input 
string: \"bogus\"",
"code":400}}
* Connection #0 to host localhost left intact
{code}
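For client code consuming such a response, the per-document failures can be read out of the responseHeader independently of the top-level error. A minimal sketch against the JSON shape shown above; the response body here is abbreviated test data matching that shape, not output from a live server:

```shell
# Parse a (shortened) tolerant-update response of the shape shown above and
# report the success/failure counts plus the ids of the rejected docs.
resp='{"responseHeader":{"numErrors":3,"errors":{"bogus1":{"message":"m1"},"bogus2":{"message":"m2"},"bogus3":{"message":"m3"}},"numAdds":2,"status":400,"QTime":4},"error":{"msg":"m3","code":400}}'
echo "$resp" | python3 -c '
import json, sys
h = json.load(sys.stdin)["responseHeader"]
print("added:", h["numAdds"], "failed:", h["numErrors"], "ids:", ",".join(sorted(h["errors"])))
'
# prints: added: 2 failed: 3 ids: bogus1,bogus2,bogus3
```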


Re: Lucene/Solr git mirror will soon turn off

2015-12-16 Thread Shawn Heisey
On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
> On 16 December 2015 at 00:44, Dawid Weiss  wrote:
>> 4) The size of JARs is really not an issue. The entire SVN repo I mirrored
>> locally (including empty interim commits to cater for svn:mergeinfos) is 4G.
>> If you strip the stuff like javadocs and side projects (Nutch, Tika, Mahout)
>> then I bet the entire history can fit in 1G total. Of course stripping JARs
>> is also doable.
> I think this answered one of the issues. So, this is not something to focus 
> on.
>
> The question I had (I am sure a very dumb one): WHY do we care about
> history preserved perfectly in Git? Because that seems to be the real
> bottleneck now. Does anybody still check out an intermediate commit
> in the Solr 1.4 branch?

I do not think we need every bit of history -- at least in the primary
read/write repository.  I wonder how much of a size difference there
would be between tossing all history before 5.0 and tossing all history
before the ivy transition was completed.

In the interests of reducing the size and download time of a clone
operation, I definitely think we should trim history in the main repo to
some arbitrary point, as long as the full history is available
elsewhere.  It's my understanding that it will remain in svn.apache.org
(possibly forever), and I think we could also create "historical"
read-only git repos.

Almost every time I am working on the code, I only care about the stable
branch and trunk.  Sometimes I will check out an older 4.x tag so I can
see the exact code referenced by a stacktrace in a user's error message,
but when this is required, I am willing to go to an entirely different
repository and chew up bandwidth/disk resources to obtain it, and I do
not care whether it is git or svn.  As time marches on, fewer people
will have reasons to look at the historical record.
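Independently of any server-side trimming, clone size and download time can also be cut on the client side with a shallow clone. A self-contained sketch; the two-commit repo here is fabricated purely for the demo:

```shell
#!/bin/sh
# Demo: a shallow clone fetches only the most recent commit, not full history.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q full && cd full
git config user.email demo@example.com
git config user.name demo
echo v1 > f.txt; git add f.txt; git commit -qm 'first'
echo v2 > f.txt; git commit -qam 'second'
cd ..
# --depth 1 over a file:// URL (plain paths ignore --depth):
git clone -q --depth 1 "file://$tmp/full" shallow
git -C shallow rev-list --count HEAD
# prints: 1
```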

Thanks,
Shawn





[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2947 - Still Failing!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2947/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:62499/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:62499/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([90B5BEBF3C63F193:2A67D1C7BF4D1F86]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin(TestContentStreamDataSource.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15225 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15225/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

35 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.DistributedVersionInfoTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([55BC4DE644AB998B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.LeaderFailoverAfterPartitionTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([55BC4DE644AB998B]:0)


FAILED:  org.apache.solr.DistributedIntervalFacetingTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:57213//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:57213//collection1
at 
__randomizedtesting.SeedInfo.seed([55BC4DE644AB998B:DDE8723CEA57F473]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:545)
at 
org.apache.solr.DistributedIntervalFacetingTest.test(DistributedIntervalFacetingTest.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:987)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 265 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/265/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Error from server at http://127.0.0.1:50070/collection1: Service Unavailable
request: 
http://127.0.0.1:46226/collection1/update?update.distrib=FROMLEADER=http%3A%2F%2F127.0.0.1%3A50070%2Fcollection1%2F=javabin=2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:50070/collection1: Service Unavailable



request: 
http://127.0.0.1:46226/collection1/update?update.distrib=FROMLEADER=http%3A%2F%2F127.0.0.1%3A50070%2Fcollection1%2F=javabin=2
at 
__randomizedtesting.SeedInfo.seed([E21C25032A12D2C8:6A481AD984EEBF30]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:982)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:139)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:153)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)

[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-12-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061581#comment-15061581
 ] 

Shalin Shekhar Mangar commented on SOLR-8220:
-

Thanks Ishan.

# The SolrIndexSearcher.decorateDocValueFields method has an 
honourUseDVsAsStoredFlag parameter which is always true. Can we remove it?
# Same for SolrIndexSearcher.getNonStoredDocValuesFieldNames?
# The wantsAllFields flag added to SolrIndexSearcher.doc doesn't seem 
necessary. I guess it was added because the patch adds non-stored docValues 
fields to 'fnames', but if we can separate the stored fnames from the 
non-stored docValues fields to be returned, then we can remove this param from 
both SolrIndexSearcher.doc and SolrIndexSearcher.getNonStoredDocValuesFieldNames.
# The pattern matching in the DocStreamer constructor makes me a bit nervous. 
Where is the pattern matching done for the current stored fields?

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return the field from docValues if it is not stored but has docValues; 
> if the field is stored, return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValues fields
> 2a - would be the easiest implementation and might be sufficient for a first 
> pass. 2b - is the current behavior



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1048 - Still Failing

2015-12-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1048/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([81D644E8EBF31222:F7E85B9BAAC4BF0D]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463)
at sun.nio.ch.Net.bind(Net.java:455)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:407)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:357)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:421)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 

Re: 5.3.2 bug fix release

2015-12-16 Thread Anshum Gupta
Yes, there was already a 5.3.2 version in JIRA. I will start back-porting
stuff to the lucene_solr_5_3 branch later in the day today.


On Thu, Dec 17, 2015 at 11:35 AM, Noble Paul  wrote:

> Agree with Shawn here.
>
> If a company has already done the work to upgrade their systems to
> 5.3.1 , they would rather have a bug fix for the old version .
>
> So anshum, is there a 5.3.2 version created in JIRA? can we start
> tagging issues to that release so that we can have a definitive list
> of bugs to be backported
>
> On Thu, Dec 17, 2015 at 10:27 AM, Anshum Gupta 
> wrote:
> > Thanks for explaining it so well Shawn :)
> >
> > Yes, that's pretty much the reason why it makes sense to have a 5.3.2
> > release.
> >
> > We might even need a 5.4.1 after that as there are more security bug
> fixes
> > that are getting committed as the feature is being tried by users and
> bugs
> > are being reported.
> >
> > On Thu, Dec 17, 2015 at 3:28 AM, Shawn Heisey 
> wrote:
> >>
> >> On 12/16/2015 2:15 PM, Upayavira wrote:
> >> > Why don't people just upgrade to 5.4? Why do we need another release
> in
> >> > the 5.3.x range?
> >>
> >> I am using a third-party custom Solr plugin.  The latest version of that
> >> plugin (which I have on my dev server) has only been certified to work
> >> with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
> >> cannot use that version yet.  If I happen to need any of the fixes that
> >> are being backported, an official 5.3.2 release would allow me to use
> >> official binaries, which will make my managers much more comfortable
> >> than a version that I compile myself.
> >>
> >> Additionally, the IT change policies in place for many businesses
> >> require a huge amount of QA work for software upgrades, but those
> >> policies may be relaxed for hotfixes and upgrades that are *only*
> >> bugfixes.  For users operating under those policies, a bugfix release
> >> will allow them to fix bugs immediately, rather than spend several weeks
> >> validating a new minor release.
> >>
> >> There is a huge amount of interest in the new security features in
> >> 5.3.x, functionality that has a number of critical problems.  Lots of
> >> users who need those features have already deployed 5.3.1.  Many of the
> >> critical problems are fixed in 5.4, and these are the fixes that Anshum
> >> wants to make available in 5.3.2.  If a user is in either of the
> >> situations that I outlined above, upgrading to 5.4 may be unrealistic.
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
> >
> >
> > --
> > Anshum Gupta
>
>
>
> --
> -
> Noble Paul
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


Re: 5.3.2 bug fix release

2015-12-16 Thread Anshum Gupta
Thanks for explaining it so well Shawn :)

Yes, that's pretty much the reason why it makes sense to have a 5.3.2
release.

We might even need a 5.4.1 after that as there are more security bug fixes
that are getting committed as the feature is being tried by users and bugs
are being reported.

On Thu, Dec 17, 2015 at 3:28 AM, Shawn Heisey  wrote:

> On 12/16/2015 2:15 PM, Upayavira wrote:
> > Why don't people just upgrade to 5.4? Why do we need another release in
> > the 5.3.x range?
>
> I am using a third-party custom Solr plugin.  The latest version of that
> plugin (which I have on my dev server) has only been certified to work
> with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
> cannot use that version yet.  If I happen to need any of the fixes that
> are being backported, an official 5.3.2 release would allow me to use
> official binaries, which will make my managers much more comfortable
> than a version that I compile myself.
>
> Additionally, the IT change policies in place for many businesses
> require a huge amount of QA work for software upgrades, but those
> policies may be relaxed for hotfixes and upgrades that are *only*
> bugfixes.  For users operating under those policies, a bugfix release
> will allow them to fix bugs immediately, rather than spend several weeks
> validating a new minor release.
>
> There is a huge amount of interest in the new security features in
> 5.3.x, functionality that has a number of critical problems.  Lots of
> users who need those features have already deployed 5.3.1.  Many of the
> critical problems are fixed in 5.4, and these are the fixes that Anshum
> wants to make available in 5.3.2.  If a user is in either of the
> situations that I outlined above, upgrading to 5.4 may be unrealistic.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


Re: 5.3.2 bug fix release

2015-12-16 Thread Noble Paul
Agree with Shawn here.

If a company has already done the work to upgrade their systems to
5.3.1, they would rather have a bug fix for the old version.

So Anshum, is there a 5.3.2 version created in JIRA? Can we start
tagging issues to that release so that we can have a definitive list
of bugs to be backported?

On Thu, Dec 17, 2015 at 10:27 AM, Anshum Gupta  wrote:
> Thanks for explaining it so well Shawn :)
>
> Yes, that's pretty much the reason why it makes sense to have a 5.3.2
> release.
>
> We might even need a 5.4.1 after that as there are more security bug fixes
> that are getting committed as the feature is being tried by users and bugs
> are being reported.
>
> On Thu, Dec 17, 2015 at 3:28 AM, Shawn Heisey  wrote:
>>
>> On 12/16/2015 2:15 PM, Upayavira wrote:
>> > Why don't people just upgrade to 5.4? Why do we need another release in
>> > the 5.3.x range?
>>
>> I am using a third-party custom Solr plugin.  The latest version of that
>> plugin (which I have on my dev server) has only been certified to work
>> with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
>> cannot use that version yet.  If I happen to need any of the fixes that
>> are being backported, an official 5.3.2 release would allow me to use
>> official binaries, which will make my managers much more comfortable
>> than a version that I compile myself.
>>
>> Additionally, the IT change policies in place for many businesses
>> require a huge amount of QA work for software upgrades, but those
>> policies may be relaxed for hotfixes and upgrades that are *only*
>> bugfixes.  For users operating under those policies, a bugfix release
>> will allow them to fix bugs immediately, rather than spend several weeks
>> validating a new minor release.
>>
>> There is a huge amount of interest in the new security features in
>> 5.3.x, functionality that has a number of critical problems.  Lots of
>> users who need those features have already deployed 5.3.1.  Many of the
>> critical problems are fixed in 5.4, and these are the fixes that Anshum
>> wants to make available in 5.3.2.  If a user is in either of the
>> situations that I outlined above, upgrading to 5.4 may be unrealistic.
>>
>> Thanks,
>> Shawn
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Anshum Gupta



-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 263 - Failure!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/263/
Java: multiarch/jdk1.7.0 -d64 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
QUERY FAILED: 
xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt447']
  request=/schema/fields?wt=xml  response= [XML tags stripped by the list 
archive -- the response listed the schema fields _root_ (string), _version_ 
(long), constantField (tdouble), id (string), and newTestFieldInt0 through 
newTestFieldInt199 (all tlong); truncated]

[jira] [Updated] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-12-16 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8279:
--
Attachment: SOLR-8279.patch

All the nocommits and such are out. I think this is pretty close; I still need 
to go over it once more.

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data with fault injection.

2015-12-16 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8279:
--
Summary: Add a new SolrCloud test that stops and starts the cluster while 
indexing data with fault injection.  (was: Add a new SolrCloud test that stops 
and starts the cluster while indexing data.)

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data with fault injection.
> 
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-16 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061319#comment-15061319
 ] 

Gregory Chanan commented on SOLR-8416:
--

I'd see what HBase or other systems do.

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.
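The wait described here is essentially a poll-with-timeout loop. A generic sketch of that pattern — the function name, the is_alive callback, and the timeout values are illustrative, not Solr API:

```python
import time

def wait_for_cores_alive(core_names, is_alive, timeout=120.0, interval=0.5):
    """Poll until every core reports alive, or raise after `timeout` seconds.

    is_alive -- callable taking a core name and returning True once the core
                is serving requests (in Solr this would consult cluster state).
    """
    deadline = time.monotonic() + timeout
    pending = set(core_names)
    while True:
        # drop every core that has come alive since the last poll
        pending = {c for c in pending if not is_alive(c)}
        if not pending:
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("cores never became alive: %s" % sorted(pending))
        time.sleep(interval)

# Toy usage: each core reports alive on its third poll.
polls = {"shard1": 0, "shard2": 0}
def fake_alive(core):
    polls[core] += 1
    return polls[core] >= 3

wait_for_cores_alive(polls, fake_alive, timeout=5.0, interval=0.01)
print("all cores alive")
```

The important design point the description raises is the return condition: the API should return on "alive", not "created", and a timeout keeps a stuck core from hanging the call forever.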



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 703 - Failure

2015-12-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/703/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:48794/p/cz/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:48794/p/cz/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([7E65F17D7D610108:F631CEA7D39D6CF0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061360#comment-15061360
 ] 

Noble Paul commented on SOLR-8429:
--

I don't think we need to change the default behavior. All we need to do is 
change the example and add this flag there, so everyone who uses this feature 
will see the flag. If we only put it in the defaults, nobody will know about 
it.

The point about security here is that there are a lot of users who run Solr 
without security and would just want minimal protection, to avoid certain 
operations being performed inadvertently. So, for them, security is a 
mechanism to protect their Solr from themselves.
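As a concrete illustration, the shipped security.json example could gain the proposed flag like this. This is a hypothetical sketch — blockUnauthenticated is only proposed in this issue, not a released option, and the credentials value is a placeholder in the BasicAuthPlugin hash-plus-salt format, not a real hash:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnauthenticated": true,
    "credentials": {
      "solr": "<base64(sha256(sha256(salt+password)))> <base64(salt)>"
    }
  }
}
```

With the flag set to true, a request carrying no credentials would get a 401 instead of passing through; leaving it out (or false) would keep today's permissive default.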

> add a flag blockUnauthenticated to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only)
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go in 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-16 Thread david.w.smi...@gmail.com
+1, totally agree. Anyway, the bloat should largely be the binaries and
unrelated projects, not code (small text files).

On Wed, Dec 16, 2015 at 10:36 PM Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:

> In defense of more history immediately available--it is often far more
> useful to poke around code history and run blame to figure out some code than
> to take it at face value. Putting this in a secondary place like the
> Apache SVN repo IMO reduces the readability of the code itself. This is
> doubly true for new developers who won't know about Apache's SVN. And
> Lucene can be quite intricate code. Further, in my own work poking around in
> github mirrors I frequently hit the current cutoff, which is one reason I
> stopped using them for anything but casual investigation.
>
> I'm not totally against a cutoff point, but I'd advocate for exhausting
> other options first, such as trimming out unrelated projects, binaries, etc.
>
> -Doug
>
>
> On Wednesday, December 16, 2015, Shawn Heisey  wrote:
>
>> On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
>> > On 16 December 2015 at 00:44, Dawid Weiss 
>> wrote:
>> >> 4) The size of JARs is really not an issue. The entire SVN repo I
>> mirrored
>> >> locally (including empty interim commits to cater for svn:mergeinfos)
>> is 4G.
>> >> If you strip the stuff like javadocs and side projects (Nutch, Tika,
>> Mahout)
>> >> then I bet the entire history can fit in 1G total. Of course stripping
>> JARs
>> >> is also doable.
>> > I think this answered one of the issues. So, this is not something to
>> focus on.
>> >
>> > The question I had (I am sure a very dumb one): WHY do we care about
>> > history preserved perfectly in Git? Because that seems to be the real
>> > bottleneck now. Does anybody still checks out an intermediate commit
>> > in Solr 1.4 branch?
>>
>> I do not think we need every bit of history -- at least in the primary
>> read/write repository.  I wonder how much of a size difference there
>> would be between tossing all history before 5.0 and tossing all history
>> before the ivy transition was completed.
>>
>> In the interests of reducing the size and download time of a clone
>> operation, I definitely think we should trim history in the main repo to
>> some arbitrary point, as long as the full history is available
>> elsewhere.  It's my understanding that it will remain in svn.apache.org
>> (possibly forever), and I think we could also create "historical"
>> read-only git repos.
>>
>> Almost every time I am working on the code, I only care about the stable
>> branch and trunk.  Sometimes I will check out an older 4.x tag so I can
>> see the exact code referenced by a stacktrace in a user's error message,
>> but when this is required, I am willing to go to an entirely different
>> repository and chew up bandwidth/disk resources to obtain it, and I do
>> not care whether it is git or svn.  As time marches on, fewer people
>> will have reasons to look at the historical record.
>>
>> Thanks,
>> Shawn
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
> --
> *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections
> , LLC | 240.476.9983
> Author: Relevant Search 
> This e-mail and all contents, including attachments, is considered to be
> Company Confidential unless explicitly stated otherwise, regardless
> of whether attachments are marked as such.
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
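
The history trimming Shawn describes behaves much like git's shallow clones: only the most recent commits travel over the wire, while the full record stays in the original repository. A minimal local sketch (paths, author identity, and commit messages are made up for illustration):

```shell
# Build a tiny repository with two commits standing in for "old" and
# "recent" history.
tmp=$(mktemp -d)
git init -q "$tmp/full"
git -C "$tmp/full" -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "old history"
git -C "$tmp/full" -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "recent work"

# Clone only the last commit. A file:// URL is needed so git performs a
# real shallow fetch instead of a local hardlink copy.
git clone -q --depth 1 "file://$tmp/full" "$tmp/shallow"

# The shallow clone carries a single commit:
git -C "$tmp/shallow" rev-list --count HEAD   # prints 1
```

A shallow clone can later be deepened with `git fetch --deepen <n>` if older commits turn out to be needed, which matches the "full history available elsewhere" model discussed above.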


[jira] [Commented] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061208#comment-15061208
 ] 

Jan Høydahl commented on SOLR-8429:
---

Let's make it default to true from 5.5, aligning with what people expect after 
enabling auth in any piece of software. We can fix back-compat using 
{{luceneMatchVersion}}, or I'm also OK with treating this as a Bug, documenting 
the change in CHANGES, since the refGuide does not even mention the current 
behavior.

Is it at all possible with 5.4 to make BasicAuth work without also specifying 
authorization?

> add a flag blockUnauthenticated to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few endpoints (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.
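
A sketch of how the proposed flag might read in {{security.json}}. This is hypothetical: {{blockUnauthenticated}} is only this issue's proposed name (the option that later shipped in Solr is spelled {{blockUnknown}}), and the credentials value is a placeholder, not a real hash:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnauthenticated": true,
    "credentials": {
      "solr": "<base64 sha256 of password+salt> <base64 salt>"
    }
  }
}
```

With the flag set to true, a request carrying no Authorization header would be rejected with a 401 instead of passing through to unprotected endpoints.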



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-16 Thread Upayavira


On Thu, Dec 17, 2015, at 12:53 AM, Alexandre Rafalovitch wrote:
> On 16 December 2015 at 00:44, Dawid Weiss  wrote:
> > 4) The size of JARs is really not an issue. The entire SVN repo I mirrored
> > locally (including empty interim commits to cater for svn:mergeinfos) is 4G.
> > If you strip the stuff like javadocs and side projects (Nutch, Tika, Mahout)
> > then I bet the entire history can fit in 1G total. Of course stripping JARs
> > is also doable.
> 
> I think this answered one of the issues. So, this is not something to
> focus on.
> 
> The question I had (I am sure a very dumb one): WHY do we care about
> history preserved perfectly in Git? Because that seems to be the real
> bottleneck now. Does anybody still check out an intermediate commit
> in the Solr 1.4 branch? Is this primarily for attribution? As a straw man
> proposal, if we saved _every_ revision in some sort of expanded form
> that preserves the history and git only contained release checkpoints
> for Solr 1 and 3, what are we losing? This feels - even to me - like
> a "lore" question as opposed to something on the solution's critical
> path. But perhaps it will trigger some useful thought.

As I keep repeating - we simply cannot delete our history from SVN - it
will be preserved for as long as Apache has an SVN repo.

I sense Dawid gets that what we need is a *functional* repo. Let's see
what he can make for us. The more history we have the better, but only
insofar as the repo stays workable.

Upayavira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5477 - Still Failing!

2015-12-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5477/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at https://127.0.0.1:53866//collection1: 
java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:53872//collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:53866//collection1: 
java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:53872//collection1
at 
__randomizedtesting.SeedInfo.seed([37CC69088F5697DA:BF9856D221AAFA22]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
