Re: Release planning for 7.0

2017-06-28 Thread Anshum Gupta
Hi Christine,

With my current progress, which is much slower than I'd have liked, I think
there is still a day before the branches are cut. How far out do you think
you are with this?

-Anshum

On Wed, Jun 28, 2017 at 9:59 AM Uwe Schindler  wrote:

> Hi Anshum,
>
>
>
> I have a häckidihickhäck workaround for the Hadoop Java 9 issue. It is
> already committed to master and 6.x branch, so the issue is fixed:
> https://issues.apache.org/jira/browse/SOLR-10966
>
>
>
> I lowered the Hadoop-Update (
> https://issues.apache.org/jira/browse/SOLR-10951) issue to “Major” level,
> so it is no longer a blocker.
>
>
>
> Nevertheless, we should fix the startup scripts for Java 9 in master
> before the release of Solr 7, because currently the shell scripts fail (on
> certain platforms). And Java 9 is coming soon, so we should really support
> it, because the speed improvements are a main reason to move your Solr
> servers to Java 9.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
> *Sent:* Sunday, June 25, 2017 7:52 PM
>
>
> *To:* dev@lucene.apache.org
> *Subject:* Re: Release planning for 7.0
>
>
>
> Hi Uwe,
>
>
>
> +1 on getting SOLR-10951
>  in before the release
> but I assume you weren't hinting at holding back the branch creation :).
>
>
>
> I am not well versed in that stuff, so it would certainly be optimal for
> someone else to look at it.
>
>
>
> -Anshum
>
> On Sun, Jun 25, 2017 at 9:58 AM Uwe Schindler  wrote:
>
> Hi,
>
>
>
> currently we have the following problem:
>
>- The first Java 9 release candidate came out. This one now uses the
>final version format. The string returned by ${java.version} is now plain
>simple “9” – bummer for one single 3rd party library!
>- This breaks one of the most basic Hadoop classes, so anything in
>Solr that refers somehow to Hadoop breaks. Of course this is HDFS - but
>also authentication! We should support Java 9, so we should really fix this
>ASAP!
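A minimal Java sketch of the breakage Uwe describes (the parsing code here is a
hypothetical illustration, not Hadoop's actual implementation):

{code}
// Hypothetical sketch, not Hadoop's actual code: pre-9 version strings like
// "1.8.0_131" made it tempting to parse the second dotted component.
public class JavaVersionCheck {
  public static void main(String[] args) {
    String version = System.getProperty("java.version"); // "1.8.0_131" or "9"

    // Fragile legacy pattern: assumes "1.MAJOR...."; on plain "9" the split
    // yields a single element, so parts[1] throws
    // ArrayIndexOutOfBoundsException.
    // int major = Integer.parseInt(version.split("\\.")[1]);

    // Robust: handles the legacy "1.x" scheme as well as "9", "9.0.1", "9-ea".
    String[] parts = version.split("[._-]");
    int major = "1".equals(parts[0]) ? Integer.parseInt(parts[1])
                                     : Integer.parseInt(parts[0]);
    System.out.println("Java major version: " + major);
  }
}
{code}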
>
>
>
> From now on, all tests running with Java 9 will fail on Jenkins until we do
> one of the following:
>
>    - Get an update from the Hadoop folks (2.7.4) with just the stupid check
>    removed (the completely useless version-checking code snippet is already
>    making the rounds on Twitter):
>    https://issues.apache.org/jira/browse/HADOOP-14586
>    - Or we update at least master/7.0 to the latest Hadoop version, which
>    already has the bug fixed. Unfortunately this does not work, as there is a
>    bug in the Hadoop MiniDFSCluster that hangs on test shutdown, and I have no
>    idea how to fix it. See https://issues.apache.org/jira/browse/SOLR-10951
>
>
>
> I’d prefer to fix https://issues.apache.org/jira/browse/SOLR-10951 for
> master before the release, so I set it as a blocker. I am hoping for help
> from Mark Miller. If the Hadoop people have a simple bugfix release for the
> earlier version, we may also be able to fix branch_6x and branch_6_6 (but I
> disabled them on Jenkins anyway).
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
> *Sent:* Saturday, June 24, 2017 10:52 PM
>
>
> *To:* dev@lucene.apache.org
> *Subject:* Re: Release planning for 7.0
>
>
>
> I'll create the 7x, and 7.0 branches *tomorrow*.
>
>
>
> Ishan, do you mean you would be able to close it by Tuesday? You would
> have to commit to both 7.0, and 7.x, in addition to master, but I think
> that should be ok.
>
>
>
> We also have SOLR-10803 open at this moment and we'd need to come to a
> decision on that as well in order to move forward with 7.0.
>
>
>
> P.S: If there are any objections to this plan, kindly let me know.
>
>
>
> -Anshum
>
>
>
> On Fri, Jun 23, 2017 at 5:03 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
> Hi Anshum,
>
>
>
> > I will send out an email a day before cutting the branch, as well as
> once the branch is in place.
>
> I'm right now on travel, and unable to finish SOLR-10574 until Monday
> (possibly Tuesday).
>
> Regards,
>
> Ishan
>
>
>
> On Tue, Jun 20, 2017 at 5:08 PM, Anshum Gupta 
> wrote:
>
> From my understanding, there's not really a 'plan' but some intention to
> release a 6.7 at some point if enough people need it, right? In that case I
> wouldn't hold back anything for a 6x line release, and I will cut the 7x and
> 7.0 branches around, but not before, the coming weekend. I will send out an
> email a day before cutting the branch, as well as once the branch is in
> place.
>
>
>
> If anyone has any objections to that, do let me know.
>
>
>
> Once that happens, we'd have a feature freeze on the 7.0 branch but we can
> take our time to iron out the bugs.
>
>
>
> @Alan: Thanks for informing. I'll make sure that LUCENE-7877 is committed
> before I cut the branch.

Re: Release planning for 7.0

2017-06-28 Thread Anshum Gupta
Thanks Uwe :)

On Wed, Jun 28, 2017 at 9:59 AM Uwe Schindler  wrote:

> Hi Anshum,
>
>
>
> I have a häckidihickhäck workaround for the Hadoop Java 9 issue. It is
> already committed to master and 6.x branch, so the issue is fixed:
> https://issues.apache.org/jira/browse/SOLR-10966
>
>
>
> I lowered the Hadoop-Update (
> https://issues.apache.org/jira/browse/SOLR-10951) issue to “Major” level,
> so it is no longer a blocker.
>
>
>
> Nevertheless, we should fix the startup scripts for Java 9 in master
> before the release of Solr 7, because currently the shell scripts fail (on
> certain platforms). And Java 9 is coming soon, so we should really support
> it, because the speed improvements are a main reason to move your Solr
> servers to Java 9.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
> *Sent:* Sunday, June 25, 2017 7:52 PM
>
>
> *To:* dev@lucene.apache.org
> *Subject:* Re: Release planning for 7.0
>
>
>
> Hi Uwe,
>
>
>
> +1 on getting SOLR-10951
>  in before the release
> but I assume you weren't hinting at holding back the branch creation :).
>
>
>
> I am not well versed in that stuff, so it would certainly be optimal for
> someone else to look at it.
>
>
>
> -Anshum
>
> On Sun, Jun 25, 2017 at 9:58 AM Uwe Schindler  wrote:
>
> Hi,
>
>
>
> currently we have the following problem:
>
>- The first Java 9 release candidate came out. This one now uses the
>final version format. The string returned by ${java.version} is now plain
>simple “9” – bummer for one single 3rd party library!
>- This breaks one of the most basic Hadoop classes, so anything in
>Solr that refers somehow to Hadoop breaks. Of course this is HDFS - but
>also authentication! We should support Java 9, so we should really fix this
>ASAP!
>
>
>
> From now on, all tests running with Java 9 will fail on Jenkins until we do
> one of the following:
>
>    - Get an update from the Hadoop folks (2.7.4) with just the stupid check
>    removed (the completely useless version-checking code snippet is already
>    making the rounds on Twitter):
>    https://issues.apache.org/jira/browse/HADOOP-14586
>    - Or we update at least master/7.0 to the latest Hadoop version, which
>    already has the bug fixed. Unfortunately this does not work, as there is a
>    bug in the Hadoop MiniDFSCluster that hangs on test shutdown, and I have no
>    idea how to fix it. See https://issues.apache.org/jira/browse/SOLR-10951
>
>
>
> I’d prefer to fix https://issues.apache.org/jira/browse/SOLR-10951 for
> master before the release, so I set it as a blocker. I am hoping for help
> from Mark Miller. If the Hadoop people have a simple bugfix release for the
> earlier version, we may also be able to fix branch_6x and branch_6_6 (but I
> disabled them on Jenkins anyway).
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
> *Sent:* Saturday, June 24, 2017 10:52 PM
>
>
> *To:* dev@lucene.apache.org
> *Subject:* Re: Release planning for 7.0
>
>
>
> I'll create the 7x, and 7.0 branches *tomorrow*.
>
>
>
> Ishan, do you mean you would be able to close it by Tuesday? You would
> have to commit to both 7.0, and 7.x, in addition to master, but I think
> that should be ok.
>
>
>
> We also have SOLR-10803 open at this moment and we'd need to come to a
> decision on that as well in order to move forward with 7.0.
>
>
>
> P.S: If there are any objections to this plan, kindly let me know.
>
>
>
> -Anshum
>
>
>
> On Fri, Jun 23, 2017 at 5:03 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
> Hi Anshum,
>
>
>
> > I will send out an email a day before cutting the branch, as well as
> once the branch is in place.
>
> I'm right now on travel, and unable to finish SOLR-10574 until Monday
> (possibly Tuesday).
>
> Regards,
>
> Ishan
>
>
>
> On Tue, Jun 20, 2017 at 5:08 PM, Anshum Gupta 
> wrote:
>
> From my understanding, there's not really a 'plan' but some intention to
> release a 6.7 at some point if enough people need it, right? In that case I
> wouldn't hold back anything for a 6x line release, and I will cut the 7x and
> 7.0 branches around, but not before, the coming weekend. I will send out an
> email a day before cutting the branch, as well as once the branch is in
> place.
>
>
>
> If anyone has any objections to that, do let me know.
>
>
>
> Once that happens, we'd have a feature freeze on the 7.0 branch but we can
> take our time to iron out the bugs.
>
>
>
> @Alan: Thanks for informing. I'll make sure that LUCENE-7877 is committed
> before I cut the branch. I have added the right fixVersion to the issue.
>
>
>
> -Anshum
>
>
>
>
>
>
>
> On Mon, Jun 19, 2017 at 8:33 AM Erick Erickson 
> wrote:
>
> Anshum:
>
> I'm one of the people that expect a 6

Increasing ASF Jenkins bandwidth

2017-06-28 Thread Steve Rowe
In an offline discussion, Cassandra Targett pointed out to me that the INFRA 
issue set up to provision an additional Jenkins node for the Lucene project 
has been closed as Won’t Fix, because:

> Per [~gstein] we will not be provisioning any more project-specific build 
> nodes on our infrastructure. If they wish to provide resources, we can 
> connect them to our master like Cassandra, etc. 

I think that if no organization is willing to provide Jenkins hardware, we 
should consider figuring out how to run Lucene/Solr tests on ASF’s 
non-project-specific nodes.

Uwe (or anybody else), do you have any thoughts about this?

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067065#comment-16067065
 ] 

Varun Thacker commented on SOLR-10968:
--

Hi Rohit,

Patch looks good! A few comments:

- The general convention is to name the patch SOLR-10968.patch rather than 
10968.patch.
- It seems like the patch also contains changes for SOLR-10969. Can you please 
remove those changes from this patch so that we can tackle them separately?
- In SolrCloud, if you issue a commit against a single core (one replica) of a 
collection, it gets forwarded to all the replicas.


Example:

{code}
> ./bin/solr start -e cloud -noprompt
> Issue a commit against one core: 
> http://localhost:8983/solr/gettingstarted_shard1_replica1/update?commit=true


>From node1:

INFO  - 2017-06-28 19:09:46.419; [c:gettingstarted s:shard2 r:core_node1 
x:gettingstarted_shard2_replica1] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
[gettingstarted_shard2_replica1]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&update.chain=add-unknown-fields-to-the-schema&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://192.168.0.3:8983/solr/gettingstarted_shard1_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 3
INFO  - 2017-06-28 19:09:46.419; [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica1] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
[gettingstarted_shard1_replica1]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&update.chain=add-unknown-fields-to-the-schema&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://192.168.0.3:8983/solr/gettingstarted_shard1_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 3
INFO  - 2017-06-28 19:09:46.432; [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica1] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
[gettingstarted_shard1_replica1]  webapp=/solr path=/update 
params={commit=true}{commit=} 0 34


>From node2 logs:

INFO  - 2017-06-28 19:09:46.429; [c:gettingstarted s:shard2 r:core_node3 
x:gettingstarted_shard2_replica2] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
[gettingstarted_shard2_replica2]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&update.chain=add-unknown-fields-to-the-schema&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://192.168.0.3:8983/solr/gettingstarted_shard1_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 13
INFO  - 2017-06-28 19:09:46.429; [c:gettingstarted s:shard1 r:core_node4 
x:gettingstarted_shard1_replica2] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
[gettingstarted_shard1_replica2]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&update.chain=add-unknown-fields-to-the-schema&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://192.168.0.3:8983/solr/gettingstarted_shard1_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 12
{code}
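For reference, a minimal SolrJ sketch of issuing the same collection-level
commit programmatically (the ZooKeeper address matches the default of the
cloud example above; everything else is illustrative, not taken from the
logs):

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class CommitExample {
  public static void main(String[] args) throws Exception {
    // "bin/solr start -e cloud -noprompt" runs an embedded ZooKeeper on 9983.
    CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("localhost:9983")
        .build();
    client.setDefaultCollection("gettingstarted");
    // A commit sent to the collection (or, as shown above, to any single
    // core) is distributed to every replica.
    client.commit();
    client.close();
  }
}
{code}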

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.1, 6.2, 6.3, 6.4, 6.5, 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFou

[jira] [Updated] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10968:
-
Component/s: (was: SolrCloud)
 Backup/Restore

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.1, 6.2, 6.3, 6.4, 6.5, 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
> {color:#14892c}  "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.Conte

[jira] [Updated] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10968:
-
Affects Version/s: 6.1
   6.2
   6.3
   6.4
   6.5

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.1, 6.2, 6.3, 6.4, 6.5, 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
> {color:#14892c}  "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHan

[jira] [Assigned] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-10968:


Assignee: Varun Thacker

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
> {color:#14892c}  "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
> org.ecli

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 797 - Failure

2017-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/797/

No tests ran.

Build Log:
[...truncated 25697 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (35.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.5 MB in 0.03 sec (960.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 69.0 MB in 0.06 sec (1153.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 79.3 MB in 0.07 sec (1168.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6163 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6163 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (261.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 50.0 MB in 0.05 sec (1058.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 141.2 MB in 0.13 sec (1073.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 142.2 MB in 0.13 sec (1077.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=31296). Happy searching!
   [sm

[jira] [Commented] (SOLR-10353) TestSQLHandler reproducible failure: No match found for function signature min()

2017-06-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067015#comment-16067015
 ] 

David Smiley commented on SOLR-10353:
-

Let's report this upstream to Calcite.

> TestSQLHandler reproducible failure: No match found for function signature 
> min()
> -
>
> Key: SOLR-10353
> URL: https://issues.apache.org/jira/browse/SOLR-10353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Reporter: Hoss Man
>Assignee: Joel Bernstein
>
> found this while doing jdk9 testing, but the seed reproduces with jdk8 as 
> well...
> {noformat}
> hossman@tray:~/lucene/dev/solr/core [master] $ git rev-parse HEAD
> c221ef0fdedaa92885746b3073150f0bd558f596
> hossman@tray:~/lucene/dev/solr/core [master] $ ant test  
> -Dtestcase=TestSQLHandler -Dtests.method=doTest -Dtests.seed=D778831206956D34 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=az-Cyrl-AZ 
> -Dtests.timezone=America/Cayman -Dtests.asserts=true 
> -Dtests.file.encoding=ANSI_X3.4-1968
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=D778831206956D34 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=America/Cayman 
> -Dtests.asserts=true -Dtests.file.encoding=ANSI_X3.4-1968
>[junit4] ERROR   28.0s | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:37402/collection1:Failed to execute sqlQuery 'select str_s, 
> count(*), sum(field_i), min(field_i), max(field_i), cast(avg(1.0 * field_i) 
> as float) from collection1 where text='' group by str_s order by 
> sum(field_i) asc limit 2' against JDBC connection 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select str_s, count(*), 
> sum(field_i), min(field_i), max(field_i), cast(avg(1.0 * field_i) as float) 
> from collection1 where text='' group by str_s order by sum(field_i) asc 
> limit 2": From line 1, column 39 to line 1, column 50: No match found for 
> function signature min()
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D778831206956D34:703C3BB66B2E7E8D]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:235)
>[junit4]>  at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2349)
>[junit4]>  at 
> org.apache.solr.handler.TestSQLHandler.testBasicGrouping(TestSQLHandler.java:675)
>[junit4]>  at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:90)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7890) MemoryIndex should allow doc values iterator to be reset to the current docid

2017-06-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067006#comment-16067006
 ] 

David Smiley commented on LUCENE-7890:
--

+1

> MemoryIndex should allow doc values iterator to be reset to the current docid
> -
>
> Key: LUCENE-7890
> URL: https://issues.apache.org/jira/browse/LUCENE-7890
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0)
>Reporter: Martijn van Groningen
> Attachments: LUCENE-7890.patch
>
>
> The `SortedSetDocValues` and `SortedNumericDocValues` instances returned by 
> the MemoryIndex should support subsequent `advanceExact(0)` invocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-28 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067007#comment-16067007
 ] 

Dennis Gove commented on SOLR-10123:


We had some issues with completely unrelated tests failing when running 
{code}ant clean test{code}. Sometimes when we ran the full test suite, varied 
sets of tests would fail, but re-running with the seed would see those tests 
then pass. There was no rhyme or reason to which tests failed or why, and 
since Analytics is a contrib module, both Houston and I are of the opinion 
that the failures are unrelated to this code change. We did also see many 
full test suite runs which showed *no* failures.

I cannot say so with 100% certainty, however, so I want to document it here. 
Houston will be watching the daily build/test log and will investigate any 
related failures.

{code}ant precommit{code} does pass.

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7890) MemoryIndex should allow doc values iterator to be reset to the current docid

2017-06-28 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066993#comment-16066993
 ] 

Alan Woodward commented on LUCENE-7890:
---

+1

> MemoryIndex should allow doc values iterator to be reset to the current docid
> -
>
> Key: LUCENE-7890
> URL: https://issues.apache.org/jira/browse/LUCENE-7890
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0)
>Reporter: Martijn van Groningen
> Attachments: LUCENE-7890.patch
>
>
> The `SortedSetDocValues` and `SortedNumericDocValues` instances returned by 
> the MemoryIndex should support subsequent `advanceExact(0)` invocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10123) Analytics Component 2.0

2017-06-28 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-10123:
---
Attachment: SOLR-10123.patch

This patch version has been applied to master. Houston will be updating this 
ticket with additional documentation and comments describing this change.

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7890) MemoryIndex should allow doc values iterator to be reset to the current docid

2017-06-28 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-7890:
--
Attachment: LUCENE-7890.patch

Attached patch with fix and a test.

> MemoryIndex should allow doc values iterator to be reset to the current docid
> -
>
> Key: LUCENE-7890
> URL: https://issues.apache.org/jira/browse/LUCENE-7890
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0)
>Reporter: Martijn van Groningen
> Attachments: LUCENE-7890.patch
>
>
> The `SortedSetDocValues` and `SortedNumericDocValues` instances returned by 
> the MemoryIndex should support subsequent `advanceExact(0)` invocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4112 - Still Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4112/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [InternalHttpClient, 
MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:220)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:254)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1213) 
 at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:697) 
 at org.apache.solr.core.SolrCore.(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:636)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1204)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:896) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:480)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:328) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:419) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1183)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:480)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:328) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:419) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1183)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1019)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:636)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1204)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:896) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:478)  
at org.apache.solr.core.SolrCore.(SolrCore.java:928)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:843)  at 
org.apache.solr.core.CoreContainer.create(Cor

Re: Release planning for 7.0

2017-06-28 Thread Christine Poerschke (BLOOMBERG/ LONDON)
I'd be interested in https://issues.apache.org/jira/browse/SOLR-10962 making it 
into 7.0 if there's time but wouldn't want to block progress.

Christine

From: dev@lucene.apache.org At: 06/27/17 07:30:47
To: dev@lucene.apache.org
Subject: Re: Release planning for 7.0

Erick, sure. If you think it'd take longer, could you mark that as a blocker so 
I hold back moving ahead with the release process?

-Anshum
On Mon, Jun 26, 2017 at 11:28 PM Anshum Gupta  wrote:

Hi Alan, 

Sorry for the delay in replying but go ahead and commit this. I'm still trying 
to work through the 8.0 version bump test failures. If you don't make it by the 
time I push the branches, kindly commit to all the branches, else I'll make 
sure that your commit makes it to all of those.

-Anshum
On Mon, Jun 26, 2017 at 3:21 AM Erik Hatcher  wrote:

I will get https://issues.apache.org/jira/browse/SOLR-10874 into 7.0 and branch 
6x in the next few days - I’ll merge to whatever branches are needed at the 
time.

   Erik


On Jun 19, 2017, at 10:45 AM, Anshum Gupta  wrote:
Hi everyone,

Here's the update about 7.0 release:

There are still unresolved blockers for 7.0. 
Solr (12):
https://issues.apache.org/jira/browse/SOLR-6630?jql=project%20%3D%20Solr%20AND%20fixVersion%20%3D%20%22master%20(7.0)%22%20and%20resolution%20%3D%20Unresolved%20and%20priority%20%3D%20Blocker

Lucene (None):
https://issues.apache.org/jira/issues/?jql=project%20%3D%20%22Lucene%20-%20Core%22%20AND%20fixVersion%20%3D%20%22master%20(7.0)%22%20AND%20resolution%20%3D%20Unresolved%20AND%20priority%20%3D%20Blocker

Here are the ones that are unassigned:
https://issues.apache.org/jira/browse/SOLR-6630
https://issues.apache.org/jira/browse/SOLR-10887
https://issues.apache.org/jira/browse/SOLR-10803
https://issues.apache.org/jira/browse/SOLR-10756
https://issues.apache.org/jira/browse/SOLR-10710
https://issues.apache.org/jira/browse/SOLR-9321
https://issues.apache.org/jira/browse/SOLR-8256

The ones that are already assigned, I'd request you to update the JIRA so we 
can track it better.

In addition, I am about to create another one, as I wasn’t able to extend 
SolrClient easily without code duplication on master. 

This brings us to 'when can we cut the branch?'. I can create the branch this 
week and we can continue to work on these, as long as none of them are 'new 
features', but I'd be happy to hear what everyone has to say. 

I know there were suggestions around a 6.7 release; does anyone who's 
interested in leading that have a timeline, or an idea of what features they 
want in that release? If so, I’d really want to wait until at least the 
branch for 6.7 is cut, for the purpose of easy back-compat management and 
guarantees.

Also, sorry for being on radio silence for the last few days. I’d been 
traveling but now I’m back :).

-Anshum Gupta
On Sun, Jun 18, 2017 at 8:57 AM Dennis Gove  wrote:

I've committed the most critical changes I wanted to make. Please don't hold up 
on a v7 release on my part.

Thanks!

Dennis

On Tue, Jun 13, 2017 at 9:27 AM, Dennis Gove  wrote:

Hi,

I also have some cleanup I'd like to do prior to a cut of 7. There are some new 
stream evaluators that I'm finding don't flow with the general flavor of 
evaluators. I'm using https://issues.apache.org/jira/browse/SOLR-10882 for the 
cleanup, but I do intend to be complete by June 16th.

Thanks,
Dennis


On Sat, Jun 10, 2017 at 11:21 AM, Ishan Chattopadhyaya 
 wrote:

Hi Anshum,
I would like to request that you consider delaying the branch cutting a bit 
till we finalize the SOLR-10574 discussions and make the changes. 
Alternatively, if you cut the branch now, we could backport the changes to 
that branch afterwards.
Regards,
Ishan

On Sat, Jun 3, 2017 at 1:02 AM, Steve Rowe  wrote:


> On Jun 2, 2017, at 5:40 PM, Shawn Heisey  wrote:
>
> On 6/2/2017 10:23 AM, Steve Rowe wrote:
>
>> I see zero benefits from cutting branch_7x now.  Shawn, can you describe why 
>> you think we should do this?
>>
>> My interpretation of your argument is that you’re in favor of delaying 
>> cutting branch_7_0 until feature freeze - which BTW is the status quo - but 
>> I don’t get why that argues for cutting branch_7x now.
>
> I think I read something in the message I replied to that wasn't
> actually stated.  I hate it when I don't read things closely enough.
>
> I meant to address the idea of making both branch_7x and branch_7_0 at
> the same time, whenever the branching happens.  Somehow I came up with
> the idea that the gist of the discussion included making the branches
> now, which I can see is not the case.
>
> My point, which I think applies equally to branch_7x, is to wait as long
> as practical before creating a branch, so that there is as little
> backporting as we can manage, particularly minimizing the amount of time
> that we have more than two branches being actively changed.

+1

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mai

[jira] [Updated] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10962:
---
Attachment: SOLR-10962.patch

Thanks, Ramsey, for the updated patch!

Attaching slightly revised patch as follows:
* always read-and-log the commitReserveDuration (if it's read and logged but 
not otherwise used in slave mode then so be it?)
* flag _master.commitReserveDuration_ as deprecated and throw an exception if 
both the deprecated _master.commitReserveDuration_ and the top-level 
_commitReserveDuration_ are configured
* updated docs to reflect (in text and example) that 
_master.commitReserveDuration_ is deprecated in favour of 
_commitReserveDuration_
* adjusted the existing-but-relocated commitReserveDuration description wording 
to avoid mention of 5Mb (anyone know how/where the 5Mb fits into the 
ReplicationHandler? couldn't find anything obvious from a quick look around.)
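For illustration, a minimal solrconfig.xml sketch of the proposed top-level 
setting (the duration value is an arbitrary example, not taken from the patch):

{code}
<!-- Reserve commit points for 10 minutes so that index files referenced by
     an in-flight replica fetch are not deleted mid-transfer. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <str name="commitReserveDuration">00:10:00</str>
</requestHandler>
{code}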


> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch
>
>
> With SolrCloud mode, when doing replication via IndexFetcher, we occasionally 
> see the fetch fail and get restarted from scratch in cases where an index 
> file is deleted after the fetch manifest is computed but before the fetch 
> actually transfers the file. The risk of this happening can be reduced with a 
> higher value of reserveCommitDuration. However, the current configuration 
> only allows this value to be adjusted in "master" mode. This change allows 
> the value to also be changed when using "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7890) MemoryIndex should allow doc values iterator to be reset to the current docid

2017-06-28 Thread Martijn van Groningen (JIRA)
Martijn van Groningen created LUCENE-7890:
-

 Summary: MemoryIndex should allow doc values iterator to be reset 
to the current docid
 Key: LUCENE-7890
 URL: https://issues.apache.org/jira/browse/LUCENE-7890
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: master (7.0)
Reporter: Martijn van Groningen


The `SortedSetDocValues` and `SortedNumericDocValues` instances returned by the 
MemoryIndex should support subsequent `advanceExact(0)` invocations.
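
A minimal sketch of the desired behaviour (field name, value and analyzer are 
placeholders):

{noformat}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.util.BytesRef;

MemoryIndex mi = new MemoryIndex(true, true);
mi.addField(new SortedSetDocValuesField("field", new BytesRef("value")),
    new StandardAnalyzer());
LeafReader leaf = mi.createSearcher().getIndexReader().leaves().get(0).reader();
SortedSetDocValues dv = leaf.getSortedSetDocValues("field");
assert dv.advanceExact(0); // first call positions on the single doc
assert dv.advanceExact(0); // desired: calling again for docid 0 should succeed too
{noformat}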



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 7x, and 7.0 branches

2017-06-28 Thread Adrien Grand
If you don't want to do it, I can do it tomorrow; but if you'd like to give
it a try, I'd be happy to provide any guidance you need.

Le mer. 28 juin 2017 à 19:38, Adrien Grand  a écrit :

> Hi Anshum,
>
> This looks like a good start to me. You would also need to remove the 6.x
> version constants so that TestBackwardCompatibility does not think they are
> worth testing, as well as all codecs, postings formats and doc values
> formats that are defined in the lucene/backward-codecs module since they
> are only about 6.x codecs.
>
> Le mer. 28 juin 2017 à 09:57, Anshum Gupta  a écrit :
>
>> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure.
>>
>> I don’t want to change anything that I’m not confident about so I’m just
>> going to remove those and commit it to my fork. If someone who’s
>> confident agrees with what I’m doing, I’ll go ahead and make those changes
>> to the upstream :).
>>
>> -Anshum
>>
>>
>>
>> On Jun 28, 2017, at 12:54 AM, Alan Woodward  wrote:
>>
>> We don’t need to support lucene5x codecs in 7, so you should be able to
>> just remove those tests (and the relevant packages from
>> backwards-codecs too), I think?
>>
>>
>> On 28 Jun 2017, at 08:38, Anshum Gupta  wrote:
>>
>> I tried to move forward to see this work before automatically computing
>> the versions but I have about 30-odd failing tests. I’ve made those changes
>> and pushed to my local GitHub account in case you have the time to look:
>> https://github.com/anshumg/lucene-solr
>>
>> Here’s the build summary if that helps:
>>
>>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of
>> 31):
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>>[junit4]   -
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>>[junit4]   -
>> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>>[junit4]   -
>> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>>[junit4]
>>[junit4]
>>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>>[junit4] Execution time total: 47 seconds
>>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24
>> ignored (24 assumptions)
>>
>>
>> -Anshum
>>
>>
>>
>> On Jun 27, 2017, at 4:15 AM, Adrien Grand  wrote:
>>
>> The test***BackwardCompatibility cases can be removed since they make
>> sure that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have to
>> be able to read Lucene 6 norms.
>>
>> TestSegmentInfos needs to be adapted to the new versions, we need to
>> replace 5 with 6 and 8 with 9. Maybe we should compute those numbers
>> automatically based on Version.LATEST.major so that it does not require
>> manual changes when moving to a new major version. That would give 5 ->
>> Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
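
In code, the suggestion above amounts to something like this (a sketch; the 
variable names are assumptions):

    int tooOldMajor = Version.LATEST.major - 2; // 5 while 7.0 is the latest
    int tooNewMajor = Version.LATEST.major + 1; // 8 while 7.0 is the latest
    // TestSegmentInfos would then derive its illegal created-version bounds
    // from these instead of hard-coding 5 and 8 on every major release.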
>>
>> I can do those changes on Thursday if you don't feel comfortable doing
>> them.
>>
>>
>>
>> Le mar. 27 juin 2017 à 08:12, Anshum Gupta  a écrit :
>>
>>> Without making any changes at all and just bumping up the version, I hit
>>> these errors when running the tests:
>>>
>>>[junit4]   2> NOTE: reproduce with: ant test
>>> -Dtestcase=TestSegmentInfos -Dtests.method=testIllegalCreatedVersion
>>> -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR
>>> -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true
>>> -Dtests.file.encoding=US-ASCII
>>>[junit4] FAILURE 0.01s J0 |
>>> TestSegmentInfos.testIllegalCreatedVersion <<<
>>>[junit4]> Throwable #1: junit.framework.AssertionFailedError:
>>> Expected exception IllegalArgumentException but no exception was thrown
>>>[junit4]> at
>>> __randomizedtesting.SeedInfo.seed([C818A61FA6C293A1:CE340683BE44C211]:0)
>>>[junit4]> at
>>> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2672)
>>>[junit4]> at
>>> org.apache.lucene.index.TestSegmentInfos.testIllegalCreatedVersion(TestSegmentInfos.java:35)
>>>[junit4]> at java.lang.Thread.run(Thread.java:748)
>>>[junit4]   2> NOTE: reproduce with: ant test
>>> -Dtestcase=TestSegmentInfos -Dtests.method=testVersionsOneSegment
>>> -Dtests.seed=C818A61FA6C293A1 -Dtests

Re: 7x, and 7.0 branches

2017-06-28 Thread Adrien Grand
Hi Anshum,

This looks like a good start to me. You would also need to remove the 6.x
version constants so that TestBackwardCompatibility does not think they are
worth testing, as well as all codecs, postings formats and doc values
formats that are defined in the lucene/backward-codecs module since they
are only about 6.x codecs.

Le mer. 28 juin 2017 à 09:57, Anshum Gupta  a écrit :

> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure.
>
> I don’t want to change anything that I’m not confident about so I’m just
> going to remove those and commit it to my fork. If someone who’s
> confident agrees with what I’m doing, I’ll go ahead and make those changes
> to the upstream :).
>
> -Anshum
>
>
>
> On Jun 28, 2017, at 12:54 AM, Alan Woodward  wrote:
>
> We don’t need to support lucene5x codecs in 7, so you should be able to
> just remove those tests (and the relevant packages from
> backwards-codecs too), I think?
>
>
> On 28 Jun 2017, at 08:38, Anshum Gupta  wrote:
>
> I tried to move forward to see this work before automatically computing
> the versions but I have about 30-odd failing tests. I’ve made those changes
> and pushed to my local GitHub account in case you have the time to look:
> https://github.com/anshumg/lucene-solr
>
> Here’s the build summary if that helps:
>
>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of
> 31):
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>[junit4]   -
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>[junit4]   -
> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>[junit4]   -
> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>[junit4]
>[junit4]
>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>[junit4] Execution time total: 47 seconds
>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24
> ignored (24 assumptions)
>
>
> -Anshum
>
>
>
> On Jun 27, 2017, at 4:15 AM, Adrien Grand  wrote:
>
> The test***BackwardCompatibility cases can be removed since they make sure
> that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have to be
> able to read Lucene 6 norms.
>
> TestSegmentInfos needs to be adapted to the new versions, we need to
> replace 5 with 6 and 8 with 9. Maybe we should compute those numbers
> automatically based on Version.LATEST.major so that it does not require
> manual changes when moving to a new major version. That would give 5 ->
> Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
>
> I can do those changes on Thursday if you don't feel comfortable doing
> them.
>
>
>
> Le mar. 27 juin 2017 à 08:12, Anshum Gupta  a écrit :
>
>> Without making any changes at all and just bumping up the version, I hit
>> these errors when running the tests:
>>
>>[junit4]   2> NOTE: reproduce with: ant test
>> -Dtestcase=TestSegmentInfos -Dtests.method=testIllegalCreatedVersion
>> -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR
>> -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true
>> -Dtests.file.encoding=US-ASCII
>>[junit4] FAILURE 0.01s J0 | TestSegmentInfos.testIllegalCreatedVersion
>> <<<
>>[junit4]> Throwable #1: junit.framework.AssertionFailedError:
>> Expected exception IllegalArgumentException but no exception was thrown
>>[junit4]> at
>> __randomizedtesting.SeedInfo.seed([C818A61FA6C293A1:CE340683BE44C211]:0)
>>[junit4]> at
>> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2672)
>>[junit4]> at
>> org.apache.lucene.index.TestSegmentInfos.testIllegalCreatedVersion(TestSegmentInfos.java:35)
>>[junit4]> at java.lang.Thread.run(Thread.java:748)
>>[junit4]   2> NOTE: reproduce with: ant test
>> -Dtestcase=TestSegmentInfos -Dtests.method=testVersionsOneSegment
>> -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR
>> -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true
>> -Dtests.file.encoding=US-ASCII
>>[junit4] ERROR   0.00s J0 | TestSegmentInfos.testVersionsOneSegment <<<
>>[junit4]> Throwable #1:
>> org.apache.lucene.index.CorruptIndexException: segments file recorded
>> inde

SOLR-629 fuzzy in edismax for 7.0?

2017-06-28 Thread Walter Underwood
I’m working on updating the 4.10.4 patch for SOLR-629 to work on master.

If anyone is familiar with the guts of the edismax query parser, I might need 
some help. It seems to take me a week to figure out that code every time I 
update this patch.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)




[jira] [Updated] (SOLR-10969) Backup API call failure leaves the backup directory undeleted

2017-06-28 Thread Rohit (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit updated SOLR-10969:
-
Attachment: 10969.patch

1. The patch automatically deletes the backup directory created by the Backup 
API in case of failure.

2. It adds a locationOverwrite parameter to the Backup API call. This lets the 
user overwrite the existing directory and take a new backup without having to 
manually delete the old backup directory first. 
http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackupName&collection=test&location=/Users/Rohit/Documents/SolrInstall/backup&locationOverwrite=false
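
The cleanup-on-failure part, as a rough sketch (the helper names here are 
hypothetical, not the attached patch verbatim):

{noformat}
// Hypothetical shape of the fix: remove the partially written backup
// directory when the backup fails, so a retry can reuse the same name.
URI backupUri = repository.resolve(location, backupName);
try {
  runBackup(backupUri);
} catch (Exception e) {
  repository.deleteDirectory(backupUri); // discard the partial backup
  throw e;
}
{noformat}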

> Backup API call failure leaves the backup directory undeleted
> -
>
> Key: SOLR-10969
> URL: https://issues.apache.org/jira/browse/SOLR-10969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Affects Versions: 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Priority: Minor
>  Labels: BACKUP, SolrCloud
> Attachments: 10969.patch
>
>
> 1. Invoke the Backup API on Solr cloud
> 2. Backup API fails
> 3. A directory is created in the location=/path/to/location with 
> name=myBackupname (as specified) in the backup API call.
> 4. On backup API failure that directory should be deleted automatically; 
> otherwise, re-invoking the API fails stating that the directory already 
> exists
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   "response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackupName&collection=citibike&location=/Users/Rohit/Documents/SolrInstall/backup
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
>   "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.

[jira] [Updated] (SOLR-10969) Backup API call failure leaves the backup directory undeleted

2017-06-28 Thread Rohit (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit updated SOLR-10969:
-
Summary: Backup API call failure leaves the backup directory undeleted  
(was: Backup API fails leaved the backup directory undeleted on failure of 
backup API call)

> Backup API call failure leaves the backup directory undeleted
> -
>
> Key: SOLR-10969
> URL: https://issues.apache.org/jira/browse/SOLR-10969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Affects Versions: 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Priority: Minor
>  Labels: BACKUP, SolrCloud
>
> 1. Invoke the Backup API on Solr cloud
> 2. Backup API fails
> 3. A directory is created in the location=/path/to/location with 
> name=myBackupname (as specified) in the backup API call.
> 4. On backup API failure that directory should be deleted automatically; 
> otherwise, re-invoking the API fails stating that the directory already 
> exists
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   "response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackupName&collection=citibike&location=/Users/Rohit/Documents/SolrInstall/backup
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
>   "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.

RE: Release planning for 7.0

2017-06-28 Thread Uwe Schindler
Hi Anshum,

 

I have a häckidihickhäck workaround for the Hadoop Java 9 issue in Solr. It is 
already committed to master and the 6.x branch, so the issue is fixed: 
https://issues.apache.org/jira/browse/SOLR-10966

 

I lowered the Hadoop-Update (https://issues.apache.org/jira/browse/SOLR-10951) 
issue to “Major” level, so it is no longer blocker.

 

Nevertheless, we should fix the startup scripts for Java 9 in master before 
release of Solr 7, because currently the shell scripts fail (on certain 
platforms). And Java 9 is coming soon, so we should really have support because 
the speed improvements are a main reason to move to Java 9 with your Solr 
servers.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Anshum Gupta [mailto:ans...@anshumgupta.net] 
Sent: Sunday, June 25, 2017 7:52 PM
To: dev@lucene.apache.org
Subject: Re: Release planning for 7.0

 

Hi Uwe,

 

+1 on getting SOLR-10951 in before the release, but I assume you weren't 
hinting at holding back the branch creation :).

 

I am not well versed with that stuff so it would certainly be optimal for 
someone else to look at that.

 

-Anshum

On Sun, Jun 25, 2017 at 9:58 AM Uwe Schindler <u...@thetaphi.de> wrote:

Hi,

 

currently we have the following problem:

*   The first Java 9 release candidate came out. This one now uses the 
final version format. The string returned by ${java.version} is now plain 
simple “9” – bummer for one single 3rd party library!
*   This breaks one of the most basic Hadoop classes, so anything in Solr 
that refers somehow to Hadoop breaks. Of course this is HDFS - but also 
authentication! We should support Java 9, so we should really fix this ASAP!
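
For illustration, the failure mode is version-parsing code of this general 
shape (a sketch of the anti-pattern, not Hadoop's exact source):

    // assumes "major.minor.patch", e.g. "1.8.0_131"
    String v = System.getProperty("java.version");
    // on plain "9" this throws StringIndexOutOfBoundsException
    float major = Float.parseFloat(v.substring(0, 3));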

 

From now on all tests running with Java 9 fail on Jenkins until we fix the 
following:

*   Get an update from the Hadoop guys (2.7.4), with just the stupid check 
removed (the completely useless version-checking code snippet already makes its 
rounds through twitter): 
https://issues.apache.org/jira/browse/HADOOP-14586
*   Or we update at least master/7.0 to the latest Hadoop version, which has 
the bug already fixed. Unfortunately this does not work, as there is a bug in 
the Hadoop MiniDFSCluster that hangs on test shutdown. I have no idea how to 
fix it. See 
https://issues.apache.org/jira/browse/SOLR-10951

 

I’d prefer to fix 
https://issues.apache.org/jira/browse/SOLR-10951 for master before release, so 
I set it as blocker. I am hoping for help from Mark Miller. If the Hadoop people 
have a simple bugfix release for the earlier version, we may also be able to 
fix branch_6x and branch_6_6 (but I disabled them on Jenkins anyway).

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de  

 

From: Anshum Gupta [mailto:ans...@anshumgupta.net]
Sent: Saturday, June 24, 2017 10:52 PM

To: dev@lucene.apache.org
Subject: Re: Release planning for 7.0

 

I'll create the 7x, and 7.0 branches tomorrow.

 

Ishan, do you mean you would be able to close it by Tuesday? You would have to 
commit to both 7.0, and 7.x, in addition to master, but I think that should be 
ok.

 

We also have SOLR-10803 open at this moment and we'd need to come to a decision 
on that as well in order to move forward with 7.0.

 

P.S: If there are any objections to this plan, kindly let me know.

 

-Anshum

 

On Fri, Jun 23, 2017 at 5:03 AM Ishan Chattopadhyaya <ichattopadhy...@gmail.com> wrote:

Hi Anshum,



> I will send out an email a day before cutting the branch, as well as once the 
> branch is in place.

I'm right now on travel, and unable to finish SOLR-10574 until Monday (possibly 
Tuesday).

Regards,

Ishan

 

On Tue, Jun 20, 2017 at 5:08 PM, Anshum Gupta <ans...@anshumgupta.net> wrote:

From my understanding, there's not really a 'plan' but some intention to 
release a 6.7 at some time if enough people need it, right? In that case I 
wouldn't hold back anything for a 6x line release, and I'd cut the 7x and 7.0 
branches around, but not before, the coming weekend. I will send out an email a 
day before cutting the branch, as well as once the branch is in place.

 

If anyone has any objections to that, do let me know.

 

Once that happens, we'd have a feature freeze on the 7.0 branch but we can take 
our time to iron out the bugs.

 

@Alan: Thanks for informing. I'll make sure that LUCENE-7877 is committed 
before I cut the branch. I have added the ri

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_131) - Build # 20003 - Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20003/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Error from server at https://127.0.0.1:37879/solr: delete the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:37879/solr: delete the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([7D4BB7647B75CFAC:FD6BD24A6A36270A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:624)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:470)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:400)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1102)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:843)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:774)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:442)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:965)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAd

[jira] [Resolved] (SOLR-10907) suppress 2 Resource Leak warnings in ExpandComponent

2017-06-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10907.

   Resolution: Fixed
Fix Version/s: master (7.0)

Resolving this issue. (Happy for alternative fixes to be explored separately 
later on if there is interest.)

> suppress 2 Resource Leak warnings in ExpandComponent
> 
>
> Key: SOLR-10907
> URL: https://issues.apache.org/jira/browse/SOLR-10907
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: SOLR-10907.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10506) Possible memory leak upon collection reload

2017-06-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10506.

   Resolution: Fixed
Fix Version/s: master (7.0)

Thanks everyone!

> Possible memory leak upon collection reload
> ---
>
> Key: SOLR-10506
> URL: https://issues.apache.org/jira/browse/SOLR-10506
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.5
>Reporter: Torsten Bøgh Köster
>Assignee: Christine Poerschke
> Fix For: master (7.0)
>
> Attachments: SOLR-10506.patch, solr_collection_reload_13_cores.png, 
> solr_gc_path_via_zk_WatchManager.png
>
>
> Upon manual Solr collection reloading, references to the closed {{SolrCore}} 
> are not fully removed by the garbage collector, as a strong reference to the 
> {{ZkIndexSchemaReader}} is held in a ZooKeeper {{Watcher}} that watches for 
> schema changes.
> In our case, this leads to a massive memory leak as managed resources are 
> still referenced by the closed {{SolrCore}}. Our Solr cloud environment 
> utilizes rather large managed resources (synonyms, stopwords). To reproduce, 
> we fired our environment up and reloaded the collection 13 times. As a result 
> we fully exhausted our heap. A closer look with the YourKit profiler revealed 
> 13 {{SolrCore}} instances, still holding strong references to the garbage 
> collection root (see screenshot 1).
> Each {{SolrCore}} instance holds a single path with strong references to the 
> gc root via a {{Watcher}} in {{ZkIndexSchemaReader}} (see screenshot 2). The 
> {{ZkIndexSchemaReader}} registers a close hook in the {{SolrCore}}, but the 
> ZooKeeper watcher is not removed upon core close.
> We supplied a GitHub pull request 
> (https://github.com/apache/lucene-solr/pull/197) that extracts the ZooKeeper 
> {{Watcher}} as a static inner class. To eliminate the memory leak, the schema 
> reader is held inside a {{WeakReference}} and the reference is explicitly 
> removed on core close.
> Initially I wanted to supply a test case but unfortunately did not find a 
> good starting point ...
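
The fix pattern described above, as a minimal sketch (class and method names 
are assumptions, not the pull request verbatim):

{noformat}
import java.lang.ref.WeakReference;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Static inner class: no implicit strong reference to the enclosing reader.
static class SchemaWatcher implements Watcher {
  private final WeakReference<ZkIndexSchemaReader> readerRef;

  SchemaWatcher(ZkIndexSchemaReader reader) {
    this.readerRef = new WeakReference<>(reader);
  }

  @Override
  public void process(WatchedEvent event) {
    ZkIndexSchemaReader reader = readerRef.get();
    if (reader == null) {
      return; // core already closed and collected; nothing to refresh
    }
    reader.processEvent(event); // hypothetical refresh hook on the reader
  }
}
{noformat}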



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10957) fix potential NPE in SolrCoreParser.init

2017-06-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10957:
---
Attachment: SOLR-10957.patch

Attaching a simpler alternative patch: getCore() returning null can be avoided 
by calling getSchema() instead.
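
Roughly (a sketch of the change; the local variable name is an assumption):

{noformat}
// before: NPE when the request carries no SolrCore
analyzer = req.getCore().getLatestSchema().getQueryAnalyzer();
// after: getSchema() also works for requests without a SolrCore
analyzer = req.getSchema().getQueryAnalyzer();
{noformat}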



Background/Context:

How might the 
[SolrCoreParser|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/SolrCoreParser.java]
 be used _without_ a SolrCore, you might ask?
* SolrCoreParser extends the Lucene 
[CoreParser|https://github.com/apache/lucene-solr/blob/master/lucene/queryparser/src/java/org/apache/lucene/queryparser/xml/CoreParser.java]
 i.e. the "core" part of the name has nothing to do with Solr cores.
* [Flax|http://www.flax.co.uk/]'s 
[Luwak|https://github.com/flaxsearch/luwak/blob/master/README.md] has a 
[MonitorQueryParser|https://github.com/flaxsearch/luwak/blob/master/luwak/src/main/java/uk/co/flax/luwak/MonitorQueryParser.java]
 interface and a 
[LuceneQueryParser|https://github.com/flaxsearch/luwak/blob/master/luwak/src/main/java/uk/co/flax/luwak/queryparsers/LuceneQueryParser.java]
 implementation of that interface.
* If a MonitorQueryParser implementation used the SolrCoreParser then it might 
do so _without_ a SolrCore (but with an IndexSchema and a SolrResourceLoader).


> fix potential NPE in SolrCoreParser.init
> 
>
> Key: SOLR-10957
> URL: https://issues.apache.org/jira/browse/SOLR-10957
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10957.patch, SOLR-10957.patch
>
>
> [SolrQueryRequestBase|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/request/SolrQueryRequestBase.java]
>  accommodates requests with a null SolrCore and this small change is for 
> SolrCoreParser.init to do likewise.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-10353) TestSQLHandler reproducible failure: No match found for function signature min()

2017-06-28 Thread Joel Bernstein
I can add the assumeFalse for these locales.
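
Something along these lines, presumably (a sketch inside TestSQLHandler, which 
inherits LuceneTestCase's assumeFalse; the root-cause note is an inference from 
the az-only failures):

    // Calcite upper-cases SQL identifiers with the default locale; under
    // Azerbaijani/Turkic rules "min".toUpperCase() becomes "MİN" (dotted
    // capital I), so no function signature matches.
    assumeFalse("This test fails on Turkic locales, see SOLR-10353",
        Locale.getDefault().getLanguage().equals("az"));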

Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Jun 28, 2017 at 9:49 AM, Steve Rowe (JIRA)  wrote:

>
> [ https://issues.apache.org/jira/browse/SOLR-10353?page=
> com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel&focusedCommentId=16066500#comment-16066500 ]
>
> Steve Rowe commented on SOLR-10353:
> ---
>
> I looked at all the Jenkins failures for this since this issue was opened,
> and all that I can see the locale for (i.e. notification email contains it
> or the Jenkins log is still available) have one of these locales: {{az}},
> {{az-AZ}}, {{az-Cyrl}}, {{az-Latn-AZ}}, {{az-Latn}}.
>
> The most recent example, from [https://jenkins.thetaphi.de/
> job/Lucene-Solr-master-Linux/19998], reproduces for me on Java8:
>
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler
> -Dtests.method=doTest -Dtests.seed=D3BEE760EAAD3B39 -Dtests.multiplier=3
> -Dtests.slow=true -Dtests.locale=az -Dtests.timezone=America/Grenada
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   24.6s J2 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: -->
> https://127.0.0.1:37433/collection1_shard2_replica_n0:Failed to execute
> sqlQuery 'select str_s, count(*), sum(field_i), min(field_i), max(field_i),
> avg(field_i) from collection1 where text='' group by str_s order by
> sum(field_i) asc limit 2' against JDBC connection 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select str_s, count(*),
> sum(field_i), min(field_i), max(field_i), avg(field_i) from collection1
> where text='' group by str_s order by sum(field_i) asc limit 2": From
> line 1, column 39 to line 1, column 50: No match found for function
> signature min()
>[junit4]>at __randomizedtesting.SeedInfo.
> seed([D3BEE760EAAD3B39:74FA5FC487162880]:0)
>[junit4]>at org.apache.solr.client.solrj.
> io.stream.SolrStream.read(SolrStream.java:219)
>[junit4]>at org.apache.solr.handler.
> TestSQLHandler.getTuples(TestSQLHandler.java:2527)
>[junit4]>at org.apache.solr.handler.TestSQLHandler.
> testBasicGrouping(TestSQLHandler.java:676)
>[junit4]>at org.apache.solr.handler.TestSQLHandler.doTest(
> TestSQLHandler.java:90)
>[junit4]>at java.base/jdk.internal.reflect.
> NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>at java.base/jdk.internal.reflect.
> NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>at java.base/jdk.internal.reflect.
> DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>at java.base/java.lang.reflect.
> Method.invoke(Method.java:564)
>[junit4]>at org.apache.solr.BaseDistributedSearchTestCase$
> ShardsRepeatRule$ShardsFixedStatement.callStatement(
> BaseDistributedSearchTestCase.java:985)
>[junit4]>at org.apache.solr.BaseDistributedSearchTestCase$
> ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.
> java:960)
> {noformat}
>
> > TestSQLHandler reproducible failure: No match found for function
> signature min()
> > 
> -
> >
> > Key: SOLR-10353
> > URL: https://issues.apache.org/jira/browse/SOLR-10353
> > Project: Solr
> >  Issue Type: Bug
> >  Security Level: Public(Default Security Level. Issues are Public)
> >  Components: Parallel SQL
> >Reporter: Hoss Man
> >Assignee: Joel Bernstein
> >
> > found this while doing jdk9 testing, but the seed reproduces with jdk8
> as well...
> > {noformat}
> > hossman@tray:~/lucene/dev/solr/core [master] $ git rev-parse HEAD
> > c221ef0fdedaa92885746b3073150f0bd558f596
> > hossman@tray:~/lucene/dev/solr/core [master] $ ant test
> -Dtestcase=TestSQLHandler -Dtests.method=doTest
> -Dtests.seed=D778831206956D34 -Dtests.nightly=true -Dtests.slow=true
> -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=America/Cayman
> -Dtests.asserts=true -Dtests.file.encoding=ANSI_X3.4-1968
> > ...
> >[junit4]   2> NOTE: reproduce with: ant test
> -Dtestcase=TestSQLHandler -Dtests.method=doTest
> -Dtests.seed=D778831206956D34 -Dtests.nightly=true -Dtests.slow=true
> -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=America/Cayman
> -Dtests.asserts=true -Dtests.file.encoding=ANSI_X3.4-1968
> >[junit4] ERROR   28.0s | TestSQLHandler.doTest <<<
> >[junit4]> Throwable #1: java.io.IOException: -->
> http://127.0.0.1:37402/collection1:Failed to execute sqlQuery 'select
> str_s, count(*), sum(field_i), min(field_i), max(field_i), cast(avg(1.0 *
> field_i) as float) from collection1 where text='' group by str_s order
> by sum(field_i) asc limit 2' against JDBC connection 'jdbc:calcitesolr:'.
> >[junit

[jira] [Created] (SOLR-10969) Backup API fails leaved the backup directory undeleted on failure of backup API call

2017-06-28 Thread Rohit (JIRA)
Rohit created SOLR-10969:


 Summary: Backup API fails leaved the backup directory undeleted on 
failure of backup API call
 Key: SOLR-10969
 URL: https://issues.apache.org/jira/browse/SOLR-10969
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore, SolrCloud
Affects Versions: 6.6
 Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
OS: Mac OSX Sierra 
Processor: 2.6 GHz Intel Core i5 (64 bit)
RAM: 8 GB
Reporter: Rohit
Priority: Minor


1. Invoke the Backup API on Solr cloud
2. Backup API fails
3. A directory is created in the location=/path/to/location with 
name=myBackupname (as specified) in the backup API call.
4. On backup API failure that directory should be deleted automatically; 
otherwise, re-invoking the API fails stating that the directory already exists

Solr 6.6.0 (fresh install, 4 node solr cluster):
1. Create a collection in Solr called citibike:
http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr

2. Index 8 documents to Solr collection citibike:
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":10,
"params":{
  "q":"*:*",
  "indent":"on",
  "wt":"json"}},
  "response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
  {
"id":"doc1",
"_version_":1570643322182041600},
  {
"id":"doc2",
"_version_":1570643322185187328},
  {
"id":"doc3",
"_version_":1570643322185187329},
  {
"id":"doc5",
"_version_":1570643322188333056},
  {
"id":"doc6",
"_version_":1570643322191478784},
  {
"id":"doc7",
"_version_":1570643322191478785},
  {
"id":"doc8",
"_version_":1570643322191478786},
  {
"id":"doc4",
"_version_":157064332217998}]
  }}


3. Try to create a backup of the collection with only 8 documents:
http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackupName&collection=citibike&location=/Users/Rohit/Documents/SolrInstall/backup

{
  "responseHeader":{
"status":500,
"QTime":20},
  "failure":{

"192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://192.168.3.15:8983/solr: Failed to backup 
core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
/Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
  "Operation backup caused 
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
 Could not backup all replicas",
  "exception":{
"msg":"Could not backup all replicas",
"rspCode":500},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Could not backup all replicas",
"trace":"org.apache.solr.common.SolrException: Could not backup all 
replicas\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.ha

[jira] [Comment Edited] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Rohit (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066766#comment-16066766
 ] 

Rohit edited comment on SOLR-10968 at 6/28/17 4:00 PM:
---

The idea for fixing this bug is to invoke a commit on all the cores of the 
collection for which the Backup API has been invoked. This patch calls commit 
before proceeding with the backup process.
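
As a client-side illustration of the same idea (a SolrJ sketch against a 
hypothetical local cluster; the patch itself does this server-side):

{noformat}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

try (CloudSolrClient client =
         new CloudSolrClient.Builder().withZkHost("localhost:9983").build()) {
  // Flush in-memory segments first, so the backup manifest cannot end up
  // referencing a segments_N file that a concurrent flush is replacing.
  client.commit("citibike");
  CollectionAdminRequest.backupCollection("citibike", "myBackupName")
      .setLocation("/path/to/backup")
      .process(client);
}
{noformat}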


was (Author: rohitcse):
Patch will call commit for all the cores of the collection

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
> {color:#14892c}  "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclips

[jira] [Updated] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Rohit (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit updated SOLR-10968:
-
Priority: Minor  (was: Major)

> Collection Backup API call fails with exception
> ---
>
> Key: SOLR-10968
> URL: https://issues.apache.org/jira/browse/SOLR-10968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
> Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPU and 
> OS: Mac OSX Sierra 
> Processor: 2.6 GHz Intel Core i5 (64 bit)
> RAM: 8 GB
>Reporter: Rohit
>Priority: Minor
>  Labels: Backup, Solr_Cloud
> Attachments: 10968.patch
>
>
> Backup API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
>  fails with exception: 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not backup all replicas"
> Steps to reproduce the issue
> Solr 6.6.0 (fresh install, 4 node solr cluster):
> 1. Create a collection in Solr called citibike:
> {color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}
> 2. Index 8 documents to Solr collection citibike:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":10,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json"}},
>   
> {color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
>   {
> "id":"doc1",
> "_version_":1570643322182041600},
>   {
> "id":"doc2",
> "_version_":1570643322185187328},
>   {
> "id":"doc3",
> "_version_":1570643322185187329},
>   {
> "id":"doc5",
> "_version_":1570643322188333056},
>   {
> "id":"doc6",
> "_version_":1570643322191478784},
>   {
> "id":"doc7",
> "_version_":1570643322191478785},
>   {
> "id":"doc8",
> "_version_":1570643322191478786},
>   {
> "id":"doc4",
> "_version_":157064332217998}]
>   }}
> 3. Try to create a backup of the collection with only 8 documents:
> {
>   "responseHeader":{
> "status":500,
> "QTime":20},
> {color:#14892c}  "failure":{
> 
> "192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://192.168.3.15:8983/solr: Failed to backup 
> core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
> /Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Could not backup all replicas",
>   "exception":{
> "msg":"Could not backup all replicas",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Could not backup all replicas",
> "trace":"org.apache.solr.common.SolrException: Could not backup all 
> replicas\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandle

[jira] [Commented] (SOLR-10826) CloudSolrClient using unsplit collection list when expanding aliases

2017-06-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066772#comment-16066772
 ] 

Varun Thacker commented on SOLR-10826:
--

Hi Tim,

Patch looks good to me and {{CloudSolrClient#testAliasHandling}} seems like a 
good place for the test.



bq. E.g. suppose you made a request with &collection=x,y where either or both 
of x and y are not real collection names but valid aliases.

Can we add another collection (the test case already creates a 3-node cluster, 
so creating another collection and indexing a couple of docs should be cheap) 
and test this specific thing as well? It should just be a few more asserts.

What do you think? I'd be more than happy to have that test and get this 
committed soon.

> CloudSolrClient using unsplit collection list when expanding aliases
> 
>
> Key: SOLR-10826
> URL: https://issues.apache.org/jira/browse/SOLR-10826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4, 6.5.1, 6.6
>Reporter: Tim Owen
>Assignee: Varun Thacker
> Attachments: SOLR-10826.patch, SOLR-10826.patch
>
>
> Some recent refactoring seems to have introduced a bug in SolrJ's 
> CloudSolrClient, when it's expanding a collection list and resolving aliases, 
> it's using the wrong local variable for the alias lookup. This leads to an 
> exception because the value is not an alias.
> E.g. suppose you made a request with {{&collection=x,y}} where either or both 
> of {{x}} and {{y}} are not real collection names but valid aliases. This will 
> fail, incorrectly, because the lookup is using {{x,y}} as a potential alias 
> name lookup.
> Patch to fix this attached, which was tested locally and fixed the issue.
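
Illustratively, the bug has this shape (variable names here are assumptions, 
not the actual CloudSolrClient source):

{noformat}
String collection = "x,y";                    // raw, unsplit request parameter
for (String name : StrUtils.splitSmart(collection, ",", true)) {
  // buggy:    aliasMap.get(collection) -- looks up the unsplit "x,y" as an alias
  // intended: aliasMap.get(name)       -- resolve each collection name separately
  String resolved = aliasMap.get(name);
}
{noformat}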



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Rohit (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit updated SOLR-10968:
-
Attachment: 10968.patch

The patch calls commit on all cores of the collection, so every replica has a 
current segments file on disk before the backup runs.
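
For illustration, the equivalent client-side workaround looks like this (sketch 
only -- the ZooKeeper address and backup location are made up; the attached 
patch does the commit on the server side instead):

{code:java}
// Force a hard commit so every core has a current segments_N file on disk,
// then run the collection backup. Host and paths below are illustrative.
try (CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("localhost:9983").build()) {
  new UpdateRequest().commit(client, "citibike"); // hard commit, all shards/replicas
  CollectionAdminRequest.backupCollection("citibike", "citibike_backup")
      .setLocation("/path/to/shared/backups")     // must be visible to all nodes
      .process(client);
}
{code}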


Re: [lucene-solr] Git Push Summary [forced push!] [Forced Update!]

2017-06-28 Thread Erick Erickson
Excellent, thanks Steve! I thoroughly approve of it being impossible
for me to mess up. Or at least much more difficult than pressing a
button or following an on-screen prompt ;).

Erick

On Wed, Jun 28, 2017 at 8:46 AM, Steve Rowe  wrote:
> I chatted with Daniel Takamori (@pono) on ASF Infra’s hipchat channel:
>
> -
> [11:32 AM] Steve Rowe: Hi, in INFRA-13613 the Lucene PMC requested that 
> forced pushes be disabled for the lucene-solr git repo, but one of our 
> committers just performed one: refs/heads/feature/autoscaling e83404fa6 -> 
> 39c6fb2e3 (forced update).  I tried to comment on INFRA-13613, but I can't 
> comment or reopen.  Should I open a new issue?
> [11:33 AM] Daniel Takamori (pono): @SteveRoweGuest2 Let me check on that
> [11:35 AM] Daniel Takamori (pono): @SteveRoweGuest2 refs/heads/feature/ isn't 
> protected
> [11:35 AM] Daniel Takamori (pono): do you want force push disabled for the 
> whole repo?
> [11:35 AM] Steve Rowe: yes
> [11:36 AM] Daniel Takamori (pono): Ahh okay.
> [11:42 AM] Daniel Takamori (pono): @SteveRoweGuest2 should be good now
> [11:42 AM] Steve Rowe: @pono: thank you!
> -
>
> --
> Steve
> www.lucidworks.com
>
>> On Jun 28, 2017, at 11:22 AM, Steve Rowe  wrote:
>>
>> I thought we disabled forced pushes???  I’ll go ask on INFRA-13613.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>> On Jun 28, 2017, at 10:54 AM, da...@apache.org wrote:
>>>
>>> Repository: lucene-solr
>>> Updated Branches:
>>> refs/heads/feature/autoscaling e83404fa6 -> 39c6fb2e3 (forced update)
>>
>
>
>




[jira] [Created] (SOLR-10968) Collection Backup API call fails with exception

2017-06-28 Thread Rohit (JIRA)
Rohit created SOLR-10968:


 Summary: Collection Backup API call fails with exception
 Key: SOLR-10968
 URL: https://issues.apache.org/jira/browse/SOLR-10968
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.6
 Environment: Tested on Fedora 24 64-bit (Linux), 8 GB RAM, 2 CPUs, and on 
Mac OS X Sierra, 2.6 GHz Intel Core i5 (64-bit), 8 GB RAM

Reporter: Rohit


Backup API 
(https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-backup)
 fails with exception: 
"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Could not backup all replicas"

Steps to reproduce the issue:

Solr 6.6.0 (fresh install, 4-node Solr cluster):

1. Create a collection in Solr called citibike:
{color:#14892c}http://localhost:8983/solr/admin/collections?action=CREATE&name=citibike&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=rohit&&createNodeSet=192.168.3.15:7574_solr,192.168.3.15:8983_solr{color}

2. Index 8 documents into the citibike collection and verify with a {{*:*}} query:
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":10,
"params":{
  "q":"*:*",
  "indent":"on",
  "wt":"json"}},
  
{color:#14892c}"response":{"numFound":8,"start":0,"maxScore":1.0,"docs":[{color}
  {
"id":"doc1",
"_version_":1570643322182041600},
  {
"id":"doc2",
"_version_":1570643322185187328},
  {
"id":"doc3",
"_version_":1570643322185187329},
  {
"id":"doc5",
"_version_":1570643322188333056},
  {
"id":"doc6",
"_version_":1570643322191478784},
  {
"id":"doc7",
"_version_":1570643322191478785},
  {
"id":"doc8",
"_version_":1570643322191478786},
  {
"id":"doc4",
"_version_":157064332217998}]
  }}


3. Try to create a backup of the collection with only 8 documents:
{
  "responseHeader":{
"status":500,
"QTime":20},
{color:#14892c}  "failure":{

"192.168.3.15:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://192.168.3.15:8983/solr: Failed to backup 
core=citibike_shard2_replica1 because java.nio.file.NoSuchFileException: 
/Users/Rohit/Documents/SolrInstall/solr-6.6.0/example/cloud/node1/solr/citibike_shard2_replica1/data/index/segments_8"},
  "Operation backup caused 
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
 Could not backup all replicas",
  "exception":{
"msg":"Could not backup all replicas",
"rspCode":500},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Could not backup all replicas",
"trace":"org.apache.solr.common.SolrException: Could not backup all 
replicas\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
 
org.eclipse.jetty.

[jira] [Issue Comment Deleted] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10951:
-
Comment: was deleted

(was: Commit 02caab4ce1601281a3827b70950ed214de7e in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=02caab4 ]

Revert "SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with 
Java 9 release"

This reverts commit 1e93367c00bc48905ec66754e0143b82b8cdec55.
)

> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
>[junit4]   2>  at 
> org.ap
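
The root cause is tiny; schematically (illustration based on the analysis in 
SOLR-10966, not Hadoop's exact code):

{code:java}
String version = System.getProperty("java.version"); // "1.8.0_131" on Java 8, plain "9" on Java 9 GA
String prefix = version.substring(0, 3);             // StringIndexOutOfBoundsException for "9"
{code}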

[jira] [Issue Comment Deleted] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10951:
-
Comment: was deleted

(was: Commit 0c369464d5194964353d5bda7a5d72c4a0594d44 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c36946 ]

Revert "SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with 
Java 9 release"

This reverts commit 1e93367c00bc48905ec66754e0143b82b8cdec55.
)


[jira] [Issue Comment Deleted] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10951:
-
Comment: was deleted

(was: Commit c30b776efd7ef3156a7119b0f35dad000394dbbb in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c30b776 ]

SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with Java 9 
release
)


[jira] [Issue Comment Deleted] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10951:
-
Comment: was deleted

(was: Commit 1e93367c00bc48905ec66754e0143b82b8cdec55 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e93367 ]

SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with Java 9 
release
)


[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066756#comment-16066756
 ] 

Uwe Schindler commented on SOLR-10951:
--

Sorry for the commit/revert traffic. I used the wrong commit message (wrong 
issue number).


[jira] [Resolved] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-10966.
--
Resolution: Fixed

> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch, SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.
> The trick here is a hack: the Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters, so 
> Hadoop's {{substring(0,3)}} fails with an IndexOutOfBoundsException in the 
> class's static initializer. To work around this, we do the following on 
> early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set the {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done inside a doPrivileged block. I ran some tests on 
> Policeman Jenkins and everything works. The hack is only applied if _we_ 
> detect Java 9.
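
A minimal sketch of that startup hack (illustration only, not the committed 
patch):

{code:java}
// Sketch of the workaround described above; the committed patch is
// authoritative. Uses java.security.AccessController / PrivilegedAction.
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
  String realVersion = System.getProperty("java.version");
  try {
    // Plain "9" breaks Shell's version.substring(0, 3) in its static
    // initializer, so present a dotted value while the class loads.
    System.setProperty("java.version", "1.9");
    Class.forName("org.apache.hadoop.util.Shell"); // forces <clinit> to run
  } catch (ClassNotFoundException e) {
    // Hadoop is not on the classpath: nothing to work around.
  } finally {
    System.setProperty("java.version", realVersion); // restore the real value
  }
  return null;
});
{code}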






[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066752#comment-16066752
 ] 

Uwe Schindler commented on SOLR-10966:
--

Please reopen if we release 6.6.1







[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066746#comment-16066746
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit 0c369464d5194964353d5bda7a5d72c4a0594d44 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c36946 ]

Revert "SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with 
Java 9 release"

This reverts commit 1e93367c00bc48905ec66754e0143b82b8cdec55.



[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066747#comment-16066747
 ] 

ASF subversion and git services commented on SOLR-10966:


Commit 8c7dd72c9a61cbbb81bd5a8bd61b3cd896e89ca0 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c7dd72 ]

SOLR-10966, HADOOP-14586: Add workaround for Hadoop-Common 2.7.2 
incompatibility with Java 9








[jira] [Comment Edited] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-28 Thread Ramsey Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066745#comment-16066745
 ] 

Ramsey Haddad edited comment on SOLR-10962 at 6/28/17 3:48 PM:
---

While I was initially trying to mimic the old structure, I agree that it is 
better to move to what Christine suggests.

Here is the fixed patch.



was (Author: rwhaddad):
While I was initially trying to mimic the old structure, I agree that is better 
to move to what Christine suggests.

Here is the fixed patch.


> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch
>
>
> In SolrCloud mode, when replicating via IndexFetcher, we occasionally see 
> the fetch fail and restart from scratch when an index file is deleted after 
> the fetch manifest is computed but before the fetch actually transfers the 
> file. The risk of this happening can be reduced with a higher value of 
> reserveCommitDuration; however, the current configuration only allows this 
> value to be adjusted in "master" mode. This change allows the value to also 
> be changed in "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html
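
For context, in master (standalone) mode the reservation is configured on the 
replication handler; a sketch per the 6.6 reference guide (where and how the 
attached patch exposes it in SolrCloud mode may differ):

{code:xml}
<!-- Sketch only: the master-mode knob from the 6.6 index-replication docs.
     The attached patch decides the SolrCloud-mode equivalent. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="commitReserveDuration">00:00:30</str>
  </lst>
</requestHandler>
{code}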






[jira] [Updated] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-28 Thread Ramsey Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramsey Haddad updated SOLR-10962:
-
Attachment: SOLR-10962.patch

While I was initially trying to mimic the old structure, I agree that it is better 
to move to what Christine suggests.

Here is the fixed patch.








[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066743#comment-16066743
 ] 

ASF subversion and git services commented on SOLR-10966:


Commit 85a27a231fdddb118ee178baac170da0097a02c0 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=85a27a2 ]

SOLR-10966, HADOOP-14586: Add workaround for Hadoop-Common 2.7.2 
incompatibility with Java 9








[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066742#comment-16066742
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit 02caab4ce1601281a3827b70950ed214de7e in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=02caab4 ]

Revert "SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with 
Java 9 release"

This reverts commit 1e93367c00bc48905ec66754e0143b82b8cdec55.



[jira] [Commented] (SOLR-10826) CloudSolrClient using unsplit collection list when expanding aliases

2017-06-28 Thread Tim Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066740#comment-16066740
 ] 

Tim Owen commented on SOLR-10826:
-

Updated patch with some extra assertions. Without the code fix, the extra 
assertions fail as expected; with the fix they pass.

I had a look at AliasIntegrationTest, but it essentially does the same kind of 
thing CloudSolrClientTest is doing.

> CloudSolrClient using unsplit collection list when expanding aliases
> 
>
> Key: SOLR-10826
> URL: https://issues.apache.org/jira/browse/SOLR-10826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4, 6.5.1, 6.6
>Reporter: Tim Owen
>Assignee: Varun Thacker
> Attachments: SOLR-10826.patch, SOLR-10826.patch
>
>
> Some recent refactoring seems to have introduced a bug in SolrJ's 
> CloudSolrClient: when expanding a collection list and resolving aliases, it 
> uses the wrong local variable for the alias lookup. This leads to an 
> exception because the value is not an alias.
> E.g. suppose you made a request with {{&collection=x,y}} where either or both 
> of {{x}} and {{y}} are not real collection names but valid aliases. This will 
> fail, incorrectly, because the lookup uses the whole string {{x,y}} as a 
> potential alias name.
> A patch to fix this is attached; it was tested locally and fixes the issue.
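
A minimal sketch of the pattern the description calls for, with purely 
illustrative names (this is not the actual CloudSolrClient code): split the 
comma-separated parameter first, then resolve each name against the alias map.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class AliasExpansionSketch {
  // Resolve each name in a comma-separated collection list; the bug described
  // above looked up the unsplit "x,y" string as if it were one alias name.
  static List<String> expand(String collectionParam, Map<String, String> aliases) {
    List<String> resolved = new ArrayList<>();
    for (String name : collectionParam.split(",")) {
      resolved.add(aliases.getOrDefault(name, name));
    }
    return resolved;
  }

  public static void main(String[] args) {
    Map<String, String> aliases = Map.of("x", "collection1", "y", "collection2");
    System.out.println(expand("x,y", aliases)); // prints [collection1, collection2]
  }
}
{code}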



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [lucene-solr] Git Push Summary [forced push!] [Forced Update!]

2017-06-28 Thread Steve Rowe
I chatted with Daniel Takamori (@pono) on ASF Infra’s hipchat channel:

-
[11:32 AM] Steve Rowe: Hi, in INFRA-13613 the Lucene PMC requested that forced 
pushes be disabled for the lucene-solr git repo, but one of our committers just 
performed one: refs/heads/feature/autoscaling e83404fa6 -> 39c6fb2e3 (forced 
update).  I tried to comment on INFRA-13613, but I can't comment or reopen.  
Should I open a new issue?
[11:33 AM] Daniel Takamori (pono): @SteveRoweGuest2 Let me check on that
[11:35 AM] Daniel Takamori (pono): @SteveRoweGuest2 refs/heads/feature/ isn't 
protected
[11:35 AM] Daniel Takamori (pono): do you want force push disabled for the 
whole repo?
[11:35 AM] Steve Rowe: yes
[11:36 AM] Daniel Takamori (pono): Ahh okay.
[11:42 AM] Daniel Takamori (pono): @SteveRoweGuest2 should be good now
[11:42 AM] Steve Rowe: @pono: thank you!
-

--
Steve
www.lucidworks.com

> On Jun 28, 2017, at 11:22 AM, Steve Rowe  wrote:
> 
> I thought we disabled forced pushes???  I’ll go ask on INFRA-13613.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Jun 28, 2017, at 10:54 AM, da...@apache.org wrote:
>> 
>> Repository: lucene-solr
>> Updated Branches:
>> refs/heads/feature/autoscaling e83404fa6 -> 39c6fb2e3 (forced update)
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10826) CloudSolrClient using unsplit collection list when expanding aliases

2017-06-28 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-10826:

Attachment: SOLR-10826.patch

> CloudSolrClient using unsplit collection list when expanding aliases
> 
>
> Key: SOLR-10826
> URL: https://issues.apache.org/jira/browse/SOLR-10826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4, 6.5.1, 6.6
>Reporter: Tim Owen
>Assignee: Varun Thacker
> Attachments: SOLR-10826.patch, SOLR-10826.patch
>
>
> Some recent refactoring seems to have introduced a bug in SolrJ's 
> CloudSolrClient: when expanding a collection list and resolving aliases, it 
> uses the wrong local variable for the alias lookup. This leads to an 
> exception because the value is not an alias.
> E.g. suppose you made a request with {{&collection=x,y}} where either or both 
> of {{x}} and {{y}} are not real collection names but valid aliases. This will 
> fail, incorrectly, because the lookup uses the whole string {{x,y}} as a 
> potential alias name.
> A patch to fix this is attached; it was tested locally and fixes the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066733#comment-16066733
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit c30b776efd7ef3156a7119b0f35dad000394dbbb in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c30b776 ]

SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with Java 9 
release


> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
>[junit4]   2>  at 
> org.apache.sol

[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066732#comment-16066732
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit 5998ebb25056fdf134a591f1230c8bff15b55f0a in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5998ebb ]

Revert "SOLR-10951: Hadoop does not work on Java 9, disable tests that break"

This reverts commit e43253312f965ba838d80c2000dee761df1f25f5.


> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner

[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066729#comment-16066729
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit 1e93367c00bc48905ec66754e0143b82b8cdec55 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e93367 ]

SOLR-10951, HADOOP-14586: Add a hack to make Hadoop's Shell work with Java 9 
release


> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
>[junit4]   2>  at 
> org.apache.solr.c

[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066719#comment-16066719
 ] 

ASF subversion and git services commented on SOLR-10951:


Commit cdc2cc5afa669dd8c97907eff8f9efe6d89ca74b in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cdc2cc5 ]

Revert "SOLR-10951: Hadoop does not work on Java 9, disable tests that break"

This reverts commit e43253312f965ba838d80c2000dee761df1f25f5.


> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.ja

[jira] [Updated] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10951:
-
Priority: Major  (was: Blocker)

> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:367)
>[junit4]   2>  at 
> org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:382)
>[junit4]   2>  at 
> org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:245)
>[junit4]  

[jira] [Updated] (SOLR-10004) javadoc test in smokeTestRelease.py wants to fail

2017-06-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10004:

Fix Version/s: 6.6.1

Thanks Steve.

> javadoc test in smokeTestRelease.py wants to fail
> -
>
> Key: SOLR-10004
> URL: https://issues.apache.org/jira/browse/SOLR-10004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Mike Drob
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: javadoc_results, missing-descriptions.txt, Screenshot 
> from 2017-05-25 17-29-23.png
>
>
> When running smoke test for 6.4, I got a lot of noise about missing content 
> related to javadocs.
> Attaching a partial output.
> We should either fix the check so that this isn't so verbose with failures we 
> ignore, or fix the failures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10004) javadoc test in smokeTestRelease.py wants to fail

2017-06-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066702#comment-16066702
 ] 

Steve Rowe commented on SOLR-10004:
---

bq.  I've added 6.7 (and master). Steve Rowe, any thoughts on whether this 
should get the fix version 6.6 instead?

Since it was committed to branch_6_6 after the release, it should get fix 
version 6.6.1. Since it was also committed on branch_6x, it should keep the 
6.7 fix version.

> javadoc test in smokeTestRelease.py wants to fail
> -
>
> Key: SOLR-10004
> URL: https://issues.apache.org/jira/browse/SOLR-10004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Mike Drob
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.7
>
> Attachments: javadoc_results, missing-descriptions.txt, Screenshot 
> from 2017-05-25 17-29-23.png
>
>
> When running smoke test for 6.4, I got a lot of noise about missing content 
> related to javadocs.
> Attaching a partial output.
> We should either fix the check so that this isn't so verbose with failures we 
> ignore, or fix the failures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10967) Cleanup the default configset

2017-06-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10967:
-
Affects Version/s: master (7.0)

> Cleanup the default configset
> -
>
> Key: SOLR-10967
> URL: https://issues.apache.org/jira/browse/SOLR-10967
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>
> The schema in the default configset is 1000 lines. We should audit it and 
> see if we can prune it a little bit.
> Also in this Jira we should fix some of the copy editing. For example, 
> comments like these are outdated:
> {code}
>  This is the Solr schema file. This file should be named "schema.xml" and
>  should be in the conf directory under the solr home
>  (i.e. ./solr/conf/schema.xml by default) 
>  or located where the classloader for the Solr webapp can find it.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10004) javadoc test in smokeTestRelease.py wants to fail

2017-06-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066690#comment-16066690
 ] 

Ishan Chattopadhyaya edited comment on SOLR-10004 at 6/28/17 3:23 PM:
--

It is complicated. It was committed to the 6.6 branch, but it was after I 
released the RC (which passed the vote).
I've added 6.7 (and master). [~steve_rowe], any thoughts on whether this should 
get the fix version 6.6 instead?


was (Author: ichattopadhyaya):
It is complicated. It was committed to the 6.6 branch, but it was after I 
released the RC (which passed the vote).
I've added 6.7 (and master). [~sarowe], any thoughts on whether this should get 
the fix version 6.6 instead?

> javadoc test in smokeTestRelease.py wants to fail
> -
>
> Key: SOLR-10004
> URL: https://issues.apache.org/jira/browse/SOLR-10004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Mike Drob
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.7
>
> Attachments: javadoc_results, missing-descriptions.txt, Screenshot 
> from 2017-05-25 17-29-23.png
>
>
> When running smoke test for 6.4, I got a lot of noise about missing content 
> related to javadocs.
> Attaching a partial output.
> We should either fix the check so that this isn't so verbose with failures we 
> ignore, or fix the failures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10967) Cleanup the default configset

2017-06-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-10967:


Assignee: Varun Thacker

> Cleanup the default configset
> -
>
> Key: SOLR-10967
> URL: https://issues.apache.org/jira/browse/SOLR-10967
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>
> The schema in the default configset is 1000 lines. We should audit it and 
> see if we can prune it a little bit.
> Also in this Jira we should fix some of the copy editing. For example, 
> comments like these are outdated:
> {code}
>  This is the Solr schema file. This file should be named "schema.xml" and
>  should be in the conf directory under the solr home
>  (i.e. ./solr/conf/schema.xml by default) 
>  or located where the classloader for the Solr webapp can find it.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10004) javadoc test in smokeTestRelease.py wants to fail

2017-06-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-10004.
-
   Resolution: Fixed
Fix Version/s: 6.7
   master (7.0)

It is complicated. It was committed to the 6.6 branch, but it was after I 
released the RC (which passed the vote).
I've added 6.7 (and master). [~sarowe], any thoughts on whether this should get 
the fix version 6.6 instead?

> javadoc test in smokeTestRelease.py wants to fail
> -
>
> Key: SOLR-10004
> URL: https://issues.apache.org/jira/browse/SOLR-10004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Mike Drob
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.7
>
> Attachments: javadoc_results, missing-descriptions.txt, Screenshot 
> from 2017-05-25 17-29-23.png
>
>
> When running smoke test for 6.4, I got a lot of noise about missing content 
> related to javadocs.
> Attaching a partial output.
> We should either fix the check so that this isn't so verbose with failures we 
> ignore, or fix the failures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [lucene-solr] Git Push Summary [forced push!] [Forced Update!]

2017-06-28 Thread Steve Rowe
I thought we disabled forced pushes???  I’ll go ask on INFRA-13613.

--
Steve
www.lucidworks.com

> On Jun 28, 2017, at 10:54 AM, da...@apache.org wrote:
> 
> Repository: lucene-solr
> Updated Branches:
>  refs/heads/feature/autoscaling e83404fa6 -> 39c6fb2e3 (forced update)


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10967) Cleanup the default configset

2017-06-28 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-10967:


 Summary: Cleanup the default configset
 Key: SOLR-10967
 URL: https://issues.apache.org/jira/browse/SOLR-10967
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


The schema in the default configset is 1000 lines. We should audit it and see 
if we can prune it a little bit.

Also in this Jira we should fix some of the copy editing. For example, 
comments like these are outdated:

{code}
 This is the Solr schema file. This file should be named "schema.xml" and
 should be in the conf directory under the solr home
 (i.e. ./solr/conf/schema.xml by default) 
 or located where the classloader for the Solr webapp can find it.
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-9-ea+173) - Build # 6686 - Still Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6686/
Java: 64bit/jdk-9-ea+173 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A4DC42509C6CBB97:C6B1BC1153E2DBA9]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithDelegationTokens

Error Message:
SOLR-10951: Hadoop does not work on Java 9

Stack Trace:
com.carrotsearch.randomizedtesting.InternalAssumptionViolatedException: 
SOLR-10951: 

[jira] [Updated] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10966:
-
Attachment: SOLR-10966.patch

Better patch that does not fail if some JVM disallows updating 
{{java.version}}.

> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch, SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.
> The trick here is a hack: the Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to take {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set the {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
> Jenkins; everything works. The hack is only done if _we_ detect Java 9.
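
A minimal sketch of the workaround described above, under the assumption that 
the class and method names here are illustrative; only 
{{org.apache.hadoop.util.Shell}} and the JDK calls are real, and the actual 
patch lives in Solr's startup code:

{code}
import java.security.AccessController;
import java.security.PrivilegedAction;

final class HadoopShellJava9Workaround {
  static void apply() {
    // "9".substring(0, 3) is what throws in Hadoop's Shell static initializer.
    if (!"9".equals(System.getProperty("java.version"))) {
      return; // only needed on the Java 9 GA version string
    }
    AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
      String saved = System.getProperty("java.version");
      try {
        // Report a dotted version so the substring(0, 3) parse succeeds...
        System.setProperty("java.version", "1.9");
        // ...and force the Shell class to initialize while the fake value is set.
        Class.forName("org.apache.hadoop.util.Shell");
      } catch (ClassNotFoundException ignored) {
        // Hadoop is not on the classpath; nothing to work around.
      } finally {
        // Restore the real value afterwards.
        System.setProperty("java.version", saved);
      }
      return null;
    });
  }
}
{code}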



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10004) javadoc test in smokeTestRelease.py wants to fail

2017-06-28 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned SOLR-10004:


Assignee: Ishan Chattopadhyaya

[~ichattopadhyaya] - can you set the fix version for this and resolve the 
issue? I tried to reason through what got committed where, but there's a lot 
going on here and I got a bit lost. Thanks!

> javadoc test in smokeTestRelease.py wants to fail
> -
>
> Key: SOLR-10004
> URL: https://issues.apache.org/jira/browse/SOLR-10004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Mike Drob
>Assignee: Ishan Chattopadhyaya
> Attachments: javadoc_results, missing-descriptions.txt, Screenshot 
> from 2017-05-25 17-29-23.png
>
>
> When running smoke test for 6.4, I got a lot of noise about missing content 
> related to javadocs.
> Attaching a partial output.
> We should either fix the check so that this isn't so verbose with failures we 
> ignore, or fix the failures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7889) Allow grouping on DoubleValuesSource ranges

2017-06-28 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7889:
--
Attachment: LUCENE-7889.patch

Here's a patch.  Still needs some javadocs.

The tests for this are pretty basic currently.  I'm going to try porting the 
randomized tests from TestGrouping to this, and make it easier to test new 
GroupSelectors, but I'll make that a separate issue.

> Allow grouping on DoubleValuesSource ranges
> ---
>
> Key: LUCENE-7889
> URL: https://issues.apache.org/jira/browse/LUCENE-7889
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: master (7.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7889.patch
>
>
> LUCENE-7701 made it easier to define new ways of grouping results.  This 
> issue adds functionality to group the values of a DoubleValuesSource into a 
> set of ranges.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7889) Allow grouping on DoubleValuesSource ranges

2017-06-28 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-7889:
-

 Summary: Allow grouping on DoubleValuesSource ranges
 Key: LUCENE-7889
 URL: https://issues.apache.org/jira/browse/LUCENE-7889
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: master (7.0)
Reporter: Alan Woodward
Assignee: Alan Woodward


LUCENE-7701 made it easier to define new ways of grouping results.  This issue 
adds functionality to group the values of a DoubleValuesSource into a set of 
ranges.
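
A toy illustration of the idea, bucketing double values into fixed-width 
ranges; this is plain Java with made-up names, not the grouping API added by 
the patch:

{code}
import java.util.Map;
import java.util.TreeMap;

final class RangeGroupingSketch {
  public static void main(String[] args) {
    double[] values = {0.5, 1.2, 3.7, 4.1, 9.9};
    double width = 2.0;
    // Map each value to the lower bound of its range and count hits per range.
    Map<Double, Integer> counts = new TreeMap<>();
    for (double v : values) {
      double lower = Math.floor(v / width) * width;
      counts.merge(lower, 1, Integer::sum);
    }
    System.out.println(counts); // prints {0.0=2, 2.0=1, 4.0=1, 8.0=1}
  }
}
{code}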



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066646#comment-16066646
 ] 

Uwe Schindler commented on SOLR-10966:
--

I already tested this on Jenkins; everything passes on Java 9. Here is the 
running test: https://jenkins.thetaphi.de/job/Lucene-Solr-Hadoop-Update/19

> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.
> The trick here is a hack: the Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to take {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set the {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
> Jenkins; everything works. The hack is only done if _we_ detect Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10966:
-
Attachment: SOLR-10966.patch

Here is the hack. I will commit it soon to master and 6.x / 6.6, so I can 
re-enable testing!

> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.
> The trick here is a hack: the Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to take {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set the {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
> Jenkins; everything works. The hack is only done if _we_ detect Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6807) Make handleSelect=false by default

2017-06-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-6807:
---
Attachment: SOLR_6807_handleSelect_false.patch

Thanks Jan.

I updated the main patch to mark StandardRequestHandler deprecated.  I updated 
every reference to it everywhere except CHANGES.txt of course, including some 
adoc pages like the one you mentioned.  I also fixed some test references to 
"standard" that was not actually being used.

I think it's ready.

> Make handleSelect=false by default
> --
>
> Key: SOLR-6807
> URL: https://issues.apache.org/jira/browse/SOLR-6807
> Project: Solr
>  Issue Type: Task
>Affects Versions: 4.10.2
>Reporter: Alexandre Rafalovitch
>Assignee: David Smiley
>Priority: Minor
>  Labels: solrconfig.xml
> Fix For: master (7.0)
>
> Attachments: SOLR_6807_handleSelect_false.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_handleSelect_false.patch, 
> SOLR_6807_test_files.patch
>
>
> In the solrconfig.xml, we have a long explanation of the legacy 
> {{handleSelect}} section. Since we are cleaning up 
> legacy stuff for version 5, is it safe now to flip handleSelect's default to 
> be *false* and therefore remove both the attribute and the whole section 
> explaining it?
> Then, a section in the Reference Guide or even a blog post can explain what to 
> do for the old clients that still need it. But it does not seem to be needed 
> anymore for new users. And it possibly causes confusion now that we have 
> implicit, explicit and overlay handlers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-10966:
-
Description: 
I did some testing to work around HADOOP-14586 and found a temporary solution. 
All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).

This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.

The trick here is a hack: the Hadoop Shell class tries to parse the 
{{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
release candidate. It contains no dots and is shorter than 3 characters. Hadoop 
tries to take {{substring(0,3)}} and fails with an IndexOutOfBoundsException 
in clinit. To work around this, we do the following on early Solr startup / 
test startup (in a static initializer, like we do for logging initialization):

- set the {{java.version}} system property to {{"1.9"}}
- initialize the Shell class in Hadoop
- restore the old value of {{java.version}}

The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
Jenkins; everything works. The hack is only done if _we_ detect Java 9.

  was:
I did some testing to work around HADOOP-14586 and found a temporary solution. 
All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).

This is a temporary workaround until we can upgrade Hadoop; see SOLR-10951.

The trick here is a hack: the Hadoop Shell class tries to parse the 
{{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
release candidate. It contains no dots and is shorter than 3 characters. Hadoop 
tries to take {{substring(0,3)}} and fails with an IndexOutOfBoundsException 
in clinit. To work around this, we do the following on early Solr startup / 
test startup (in a static initializer, like we do for logging initialization):

- set the {{java.version}} system property to {{"1.9"}}
- initialize the Shell class in Hadoop
- restore the old value of {{java.version}}

The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
Jenkins; everything works. The hack is only done if _we_ detect Java 9 and we 
are not on Windows (because Windows fails anyway, so we don't try).


> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: master (7.0), 6.7, 6.6.1
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop, see SOLR-10951.
> The trick here is a hack: The Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to get the {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set the {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
> Jenkins, everything works. The hack is only applied if _we_ detect Java 9.
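
For illustration, a minimal sketch of the startup trick described above 
(hypothetical class and method names, not the actual Solr patch):

{noformat}
// Hypothetical sketch of the workaround described above; not the actual
// Solr patch. Temporarily fake java.version so Hadoop's Shell class can
// parse it during its static initialization under Java 9.
import java.security.AccessController;
import java.security.PrivilegedAction;

public final class HadoopShellInitHack {
  public static void apply() {
    String version = System.getProperty("java.version", "");
    if (!version.startsWith("9")) {
      return; // hack is only needed when we detect Java 9
    }
    AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
      String saved = System.getProperty("java.version");
      try {
        // "1.9" contains a dot and has >= 3 chars, so substring(0,3) works
        System.setProperty("java.version", "1.9");
        // Force Shell's static initializer to run while the fake value is set
        Class.forName("org.apache.hadoop.util.Shell");
      } catch (ClassNotFoundException e) {
        // Hadoop is not on the classpath; nothing to work around
      } finally {
        System.setProperty("java.version", saved); // restore the real value
      }
      return null;
    });
  }
}
{noformat}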






[jira] [Commented] (SOLR-10951) Update Hadoop dependencies to 2.8.1, so Solr works with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066613#comment-16066613
 ] 

Uwe Schindler commented on SOLR-10951:
--

As this issue seems likely to take longer to solve (and is not easy to solve in 
Lucene/Solr 6.x), I have a hack available that makes Hadoop integration work 
with Solr 6 and 7 using the current Hadoop libraries: SOLR-10966

Once this issue is solved, we should revert the hack - of course!

> Update Hadoop dependencies to 2.8.1, so Solr works with Java 9
> --
>
> Key: SOLR-10951
> URL: https://issues.apache.org/jira/browse/SOLR-10951
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: master (7.0)
>
> Attachments: SOLR-10951.patch, SOLR-10951.patch
>
>
> See issue: HADOOP-14586
> Since Java 9 build 175 (the first Java 9 release candidate), Hadoop 
> integration fails on Java 9:
> {noformat}
>[junit4]   2> 129956 ERROR (jetty-launcher-232-thread-2) [] 
> o.a.s.c.SolrCore null:java.lang.ExceptionInInitializerError
>[junit4]   2>  at 
> org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>[junit4]   2>  at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.<init>(DelegationTokenManager.java:115)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:148)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:118)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:238)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:209)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.initializeAuthHandler(HadoopAuthFilter.java:120)
>[junit4]   2>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:175)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthFilter.init(HadoopAuthFilter.java:68)
>[junit4]   2>  at 
> org.apache.solr.security.HadoopAuthPlugin.init(HadoopAuthPlugin.java:142)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:360)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:684)
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:522)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:257)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:177)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1565)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1599)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1285)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1130)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
>[junit4]   2>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
>[junit4]   2>  at 
> org.ap

[jira] [Created] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-06-28 Thread Uwe Schindler (JIRA)
Uwe Schindler created SOLR-10966:


 Summary: Add workaround for Hadoop-Common 2.7.2 incompatibility 
with Java 9
 Key: SOLR-10966
 URL: https://issues.apache.org/jira/browse/SOLR-10966
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Hadoop Integration, hdfs
Affects Versions: 6.6
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Critical
 Fix For: master (7.0), 6.7, 6.6.1


I did some testing to work around HADOOP-14586 and found a temporary solution. 
All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).

This is a temporary workaround until we can upgrade Hadoop, see SOLR-10951.

The trick here is a hack: The Hadoop Shell class tries to parse the 
{{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
release candidate. It contains no dots and is shorter than 3 characters. Hadoop 
tries to get the {{substring(0,3)}} and fails with an IndexOutOfBoundsException 
in clinit. To work around this, we do the following on early Solr startup / 
test startup (in a static initializer, like we do for logging initialization):

- set the {{java.version}} system property to {{"1.9"}}
- initialize the Shell class in Hadoop
- restore the old value of {{java.version}}

The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
Jenkins, everything works. The hack is only applied if _we_ detect Java 9 and we 
are not on Windows (because Windows fails anyway, so we don't try).






[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066585#comment-16066585
 ] 

Mark Miller commented on SOLR-10783:


I think we may want to use a new JIRA issue if we want to make this pluggable, 
but it does seem like we should make the design more object-oriented to start.

Is there a way to rework this so that we have a HadoopSSLConfiguration subclass 
that is used rather than special-casing SSLConfigurations for different 
providers?
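
For illustration, a rough sketch of the shape that suggestion could take; the 
class and method names here are hypothetical (not the actual patch), and it 
relies on Hadoop's {{Configuration.getPassword}}, which consults configured 
credential providers:

{noformat}
// Hypothetical sketch of the subclassing idea above; names are
// illustrative and do not match the actual Solr classes.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

class SSLConfigurations {
  /** Default behavior: read the store password from a system property. */
  protected String resolvePassword(String key) {
    return System.getProperty(key);
  }
}

class HadoopSSLConfigurations extends SSLConfigurations {
  private final Configuration conf = new Configuration();

  /** Override: look the password up in a Hadoop Credential Provider. */
  @Override
  protected String resolvePassword(String key) {
    try {
      // getPassword() consults hadoop.security.credential.provider.path
      char[] password = conf.getPassword(key);
      return password == null ? super.resolvePassword(key) : new String(password);
    } catch (IOException e) {
      throw new RuntimeException("Credential provider lookup failed for " + key, e);
    }
  }
}
{noformat}

This would keep the provider-specific lookup behind one overridable method 
instead of branching inside a single class.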

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: When Solr is used in a Hadoop environment, HCP support gives 
> better integration and a unified method for passing sensitive credentials to 
> Solr.






[jira] [Created] (LUCENE-7888) TestMixedDocValuesUpdates.testManyReopensAndFields() failure

2017-06-28 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7888:
--

 Summary: TestMixedDocValuesUpdates.testManyReopensAndFields() 
failure
 Key: LUCENE-7888
 URL: https://issues.apache.org/jira/browse/LUCENE-7888
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


Non-reproducing failure from 
[https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/794/]:

{noformat}
Checking out Revision e8057309b90db0c79fc273e2284948b84c16ce4c 
(refs/remotes/origin/master)
[...]
   [smoker][junit4] Suite: org.apache.lucene.index.TestMixedDocValuesUpdates
   [smoker][junit4] IGNOR/A 0.00s J0 | 
TestMixedDocValuesUpdates.testTonsOfUpdates
   [smoker][junit4]> Assumption #1: 'nightly' test group is disabled 
(@Nightly())
   [smoker][junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestMixedDocValuesUpdates -Dtests.method=testManyReopensAndFields 
-Dtests.seed=69A3133AC96F545A -Dtests.multiplier=2 -Dtests.locale=es-SV 
-Dtests.timezone=Europe/Zagreb -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [smoker][junit4] FAILURE 0.04s J0 | 
TestMixedDocValuesUpdates.testManyReopensAndFields <<<
   [smoker][junit4]> Throwable #1: java.lang.AssertionError: invalid 
numeric value for doc=0, field=f0, reader=_3(7.0.0):c35/1 expected:<3> but 
was:<2>
   [smoker][junit4]>at 
__randomizedtesting.SeedInfo.seed([69A3133AC96F545A:5F5F7115489A3746]:0)
   [smoker][junit4]>at 
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields(TestMixedDocValuesUpdates.java:138)
   [smoker][junit4]>at java.lang.Thread.run(Thread.java:748)
   [smoker][junit4]   2> NOTE: test params are: codec=Lucene70, 
sim=RandomSimilarity(queryNorm=true): {}, locale=es-SV, timezone=Europe/Zagreb
   [smoker][junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle 
Corporation 1.8.0_131 (64-bit)/cpus=4,threads=1,free=177411120,total=287309824
   [smoker][junit4]   2> NOTE: All tests run in this JVM: 
[TestRegexpRandom, TestStandardAnalyzer, TestMmapDirectory, TestCodecs, 
TestDocValuesQueries, TestNeverDelete, TestIndexWriterConfig, 
TestNoDeletionPolicy, TestBooleanMinShouldMatch, TestIndexSorting, 
TestDocValuesIndexing, TestTragicIndexWriterDeadlock, TestIntBlockPool, 
TestBinaryTerms, TestIndexWriter, Test4GBStoredFields, TestSortedSetSelector, 
TestAllFilesCheckIndexHeader, TestFilterCodecReader, TestCachingCollector, 
TestNotDocIdSet, TestQueryBuilder, TestMaxTermFrequency, TestForceMergeForever, 
TestFieldMaskingSpanQuery, TestRegExp, TestPointValues, 
TestIndexWriterOutOfFileDescriptors, Test2BTerms, TestTermsEnum, 
TestSloppyPhraseQuery, TestBoostQuery, TestRateLimiter, 
TestIndexWriterExceptions, TestMultiPhraseQuery, TestSimpleSearchEquivalence, 
TestBinaryDocValuesUpdates, TestPerSegmentDeletes, Test2BPoints, 
TestSimpleExplanations, TestPerFieldPostingsFormat, 
TestLucene50TermVectorsFormat, TestSingleInstanceLockFactory, 
TestLucene50CompoundFormat, TestMaxPosition, TestTotalHitCountCollector, 
TestConstantScoreQuery, TestWordlistLoader, TestThreadedForceMerge, 
TestBytesRefArray, TestPointQueries, TestCharFilter, TestSimilarityProvider, 
TestBytesStore, TestIntroSorter, TestWildcardRandom, TestSimilarity, 
TestFieldValueQuery, TestOmitNorms, TestUnicodeUtil, TestLRUQueryCache, 
TestTermQuery, TestInPlaceMergeSorter, TestNot, TestTopFieldCollector, 
TestIndexWriterFromReader, TestCharArrayMap, TestUTF32ToUTF8, TestDocIdsWriter, 
TestDocsAndPositions, TestNewestSegment, TestTerm, TestCodecHoldsOpenFiles, 
TestPagedBytes, TestPackedInts, TestBasics, TestNRTThreads, TestStressAdvance, 
TestSearchAfter, TestHighCompressionMode, TestDocumentsWriterStallControl, 
TestStressIndexing, TestSnapshotDeletionPolicy, TestNRTReaderWithThreads, 
TestTieredMergePolicy, TestLevenshteinAutomata, TestWeakIdentityMap, 
TestRegexpRandom2, TestSegmentTermDocs, TestPerFieldPostingsFormat2, 
TestMultiDocValues, TestHugeRamFile, TestLazyProxSkipping, TestDeterminism, 
TestBytesRefHash, TestNearSpansOrdered, TestTermRangeQuery, TestDocumentWriter, 
TestCrashCausesCorruptIndex, TestLiveFieldValues, TestFuzzyQuery, 
TestAutomatonQuery, TestMultiLevelSkipList, TestCheckIndex, TestConjunctions, 
TestVirtualMethod, TestSearch, TestDateTools, TestDocCount, 
TestAttributeSource, TestIsCurrent, TestIndexWriterLockRelease, 
TestByteBlockPool, TestDemo, TestRollback, MultiCollectorTest, 
TestSimpleAttributeImpl, TestByteArrayDataInput, TestPackedTokenAttributeImpl, 
TestForUtil, TestLucene50StoredFieldsFormatHighCompression, TestFieldType, 
Test2BSortedDocValuesFixedSorted, Test2BSortedDocValuesOrds, 
TestCustomTermFreq, TestDocIDMerger, TestDocValues, TestDocsWithFieldSet, 
TestExitableDirectoryReader, TestFieldInvertState, TestFieldReuse, 
TestFilterDirectoryReader, TestIndexReaderClose, TestIndexWriterOnVMError, 
TestInfoStream, TestMergePolicyWrapper, TestMixedDocValuesUpdates]
{noformat}

[jira] [Created] (LUCENE-7887) TestIndexWriterWithThreads.testIOExceptionDuringWriteSegmentWithThreadsOnlyOnce() failure: numTerms2=0 vs -8

2017-06-28 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7887:
--

 Summary: 
TestIndexWriterWithThreads.testIOExceptionDuringWriteSegmentWithThreadsOnlyOnce()
 failure: numTerms2=0 vs -8
 Key: LUCENE-7887
 URL: https://issues.apache.org/jira/browse/LUCENE-7887
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


Non-reproducing failure from 
[https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1338/]:

{noformat}
Checking out Revision 9f56698d33d1db9fab6a0d6f63b360b334f71583 
(refs/remotes/origin/master)
[...]
   [junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
   [junit4]   1> Thread-12393: ERROR: unexpected Throwable:
   [junit4]   1> java.lang.AssertionError: numTerms2=0 vs -8
   [junit4]   1>at 
org.apache.lucene.index.BufferedUpdatesStream.checkDeleteStats(BufferedUpdatesStream.java:376)
   [junit4]   1>at 
org.apache.lucene.index.BufferedUpdatesStream.push(BufferedUpdatesStream.java:85)
   [junit4]   1>at 
org.apache.lucene.index.IndexWriter.publishFrozenUpdates(IndexWriter.java:2655)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriterFlushQueue$FlushTicket.finishFlush(DocumentsWriterFlushQueue.java:205)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:248)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:116)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:138)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:200)
   [junit4]   1>at 
org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:5004)
   [junit4]   1>at 
org.apache.lucene.index.DocumentsWriter$ForcedPurgeEvent.process(DocumentsWriter.java:751)
   [junit4]   1>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5061)
   [junit4]   1>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5049)
   [junit4]   1>at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1719)
   [junit4]   1>at 
org.apache.lucene.index.TestIndexWriterWithThreads$IndexerThread.run(TestIndexWriterWithThreads.java:86)
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestIndexWriterWithThreads 
-Dtests.method=testIOExceptionDuringWriteSegmentWithThreadsOnlyOnce 
-Dtests.seed=66484822FD0371B5 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CR -Dtests.timezone=America/Manaus -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.06s J2 | 
TestIndexWriterWithThreads.testIOExceptionDuringWriteSegmentWithThreadsOnlyOnce 
<<<
   [junit4]> Throwable #1: java.lang.AssertionError: hit unexpected 
Throwable
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([66484822FD0371B5:2A23C9E43BA35250]:0)
   [junit4]>at 
org.apache.lucene.index.TestIndexWriterWithThreads._testMultipleThreadsFailure(TestIndexWriterWithThreads.java:300)
   [junit4]>at 
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringWriteSegmentWithThreadsOnlyOnce(TestIndexWriterWithThreads.java:497)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterWithThreads_66484822FD0371B5-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{date=PostingsFormat(name=LuceneVarGapFixedInterval), 
field=PostingsFormat(name=LuceneVarGapFixedInterval), 
docid=PostingsFormat(name=Asserting), 
titleTokenized=PostingsFormat(name=LuceneFixedGap), 
body=PostingsFormat(name=LuceneVarGapFixedInterval), 
title=Lucene50(blocksize=128)}, docValues:{dv=DocValuesFormat(name=Direct), 
docid_intDV=DocValuesFormat(name=Memory), field=DocValuesFormat(name=Direct), 
titleDV=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=1926, 
maxMBSortInHeap=6.378832546318826, sim=RandomSimilarity(queryNorm=false): 
{field=DFR GB2, titleTokenized=DFR I(n)3(800.0), body=IB LL-D1}, locale=es-CR, 
timezone=America/Manaus
   [junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
1.8.0_131 (64-bit)/cpus=4,threads=1,free=120430360,total=441974784
   [junit4]   2> NOTE: All tests run in this JVM: [TestLucene70NormsFormat, 
TestOfflineSorter, TestPrefixQuery, TestStringMSBRadixSorter, TestFieldsReader, 
TestSynonymQuery, Test

[jira] [Commented] (SOLR-10353) TestSQLHandler reproducible failure: No match found for function signature min()

2017-06-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066500#comment-16066500
 ] 

Steve Rowe commented on SOLR-10353:
---

I looked at all the Jenkins failures for this since the issue was opened, and 
all those whose locale I can see (i.e. the notification email contains it or the 
Jenkins log is still available) have one of these locales: {{az}}, {{az-AZ}}, 
{{az-Cyrl}}, {{az-Latn-AZ}}, {{az-Latn}}.

The most recent example, from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19998], reproduces 
for me on Java8:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
-Dtests.method=doTest -Dtests.seed=D3BEE760EAAD3B39 -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=az -Dtests.timezone=America/Grenada 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   24.6s J2 | TestSQLHandler.doTest <<<
   [junit4]> Throwable #1: java.io.IOException: --> 
https://127.0.0.1:37433/collection1_shard2_replica_n0:Failed to execute 
sqlQuery 'select str_s, count(*), sum(field_i), min(field_i), max(field_i), 
avg(field_i) from collection1 where text='' group by str_s order by 
sum(field_i) asc limit 2' against JDBC connection 'jdbc:calcitesolr:'.
   [junit4]> Error while executing SQL "select str_s, count(*), 
sum(field_i), min(field_i), max(field_i), avg(field_i) from collection1 where 
text='' group by str_s order by sum(field_i) asc limit 2": From line 1, 
column 39 to line 1, column 50: No match found for function signature 
min()
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([D3BEE760EAAD3B39:74FA5FC487162880]:0)
   [junit4]>at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:219)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2527)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.testBasicGrouping(TestSQLHandler.java:676)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:90)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
{noformat}
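
A plausible culprit, given that only Azerbaijani locales are affected (this is 
an assumption, not confirmed in this thread): like Turkish, Azerbaijani uses 
dotted/dotless-i casing rules in the JDK, so locale-default case conversion of 
SQL function names no longer round-trips:

{noformat}
// Demonstrates the Azerbaijani dotless-i casing pitfall (standard JDK
// behavior); a hypothetical demo class, not part of the Solr code base.
import java.util.Locale;

public class AzLocaleCasingDemo {
  public static void main(String[] args) {
    Locale az = new Locale("az");
    System.out.println("MIN".toLowerCase(az));          // "mın" (dotless ı)
    System.out.println("MIN".toLowerCase(Locale.ROOT)); // "min"
    System.out.println("min".toUpperCase(az));          // "MİN" (dotted İ)
  }
}
{noformat}

If the SQL layer lower-cases function names using the default locale, {{min}} 
would never match under an {{az}} locale.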

> TestSQLHandler reproducible failure: No match found for function signature 
> min()
> -
>
> Key: SOLR-10353
> URL: https://issues.apache.org/jira/browse/SOLR-10353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Reporter: Hoss Man
>Assignee: Joel Bernstein
>
> found this while doing jdk9 testing, but the seed reproduces with jdk8 as 
> well...
> {noformat}
> hossman@tray:~/lucene/dev/solr/core [master] $ git rev-parse HEAD
> c221ef0fdedaa92885746b3073150f0bd558f596
> hossman@tray:~/lucene/dev/solr/core [master] $ ant test  
> -Dtestcase=TestSQLHandler -Dtests.method=doTest -Dtests.seed=D778831206956D34 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=az-Cyrl-AZ 
> -Dtests.timezone=America/Cayman -Dtests.asserts=true 
> -Dtests.file.encoding=ANSI_X3.4-1968
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=D778831206956D34 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=America/Cayman 
> -Dtests.asserts=true -Dtests.file.encoding=ANSI_X3.4-1968
>[junit4] ERROR   28.0s | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:37402/collection1:Failed to execute sqlQuery 'select str_s, 
> count(*), sum(field_i), min(field_i), max(field_i), cast(avg(1.0 * field_i) 
> as float) from collection1 where text='' group by str_s order by 
> sum(field_i) asc limit 2' against JDBC connection 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select str_s, count(*), 
> sum(field_i), min(field_i), max(field_i), cast(avg(1.0 * field_i) as float) 
> from collection1 where text='' group by str_s order by sum(field_i) asc 
> limit 2": From line 1, column 39 to line 1, column 50: No mat

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 931 - Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/931/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message:
Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all live and active
at 
__randomizedtesting.SeedInfo.seed([90C3722736CEFF4C:13B52DD5E0B7F1ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11515 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.TestCloudRecovery_90C3722736CEFF4C-001/init-core-data-

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_131) - Build # 20001 - Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20001/
Java: 64bit/jdk1.8.0_131 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([EC12DF8EB267F17A:6446E0541C9B9C82]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
org.apache.lucene.store.AlreadyClosedException: Already closed

Stack Trace:
javax.management.RuntimeMBeanException: 
org.apache.lucene.store.AlreadyClosedException: Already closed
at 
__randomizedtesting.SeedInfo.seed([EC12DF8EB267F17A:6

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 384 - Failure

2017-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/384/

3 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([7FF4A5E34E6AC28C:DD2CEB392A98EB8A]:0)
at 
org.apache.lucene.util.packed.Packed8ThreeBlocks.<init>(Packed8ThreeBlocks.java:41)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:963)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:939)
at 
org.apache.lucene.util.packed.GrowableWriter.ensureCapacity(GrowableWriter.java:80)
at 
org.apache.lucene.util.packed.GrowableWriter.set(GrowableWriter.java:88)
at 
org.apache.lucene.util.packed.AbstractPagedMutable.set(AbstractPagedMutable.java:98)
at org.apache.lucene.util.fst.NodeHash.addNew(NodeHash.java:152)
at org.apache.lucene.util.fst.NodeHash.rehash(NodeHash.java:169)
at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:133)
at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:214)
at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:310)
at org.apache.lucene.util.fst.Builder.add(Builder.java:414)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.writeFST(MemoryDocValuesConsumer.java:367)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.addSortedField(MemoryDocValuesConsumer.java:404)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeSortedField(DocValuesConsumer.java:653)
at 
org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:204)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.merge(PerFieldDocValuesFormat.java:153)
at 
org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:167)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:111)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4356)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3931)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2083)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:5005)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5043)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5034)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1574)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
at 
org.apache.lucene.index.TestIndexSorting.testRandom3(TestIndexSorting.java:2230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)


FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#3): CloudJettyRunner 
[url=https://127.0.0.1:51869/collection1]

Stack Trace:
java.lang.AssertionError: Unable to restart (#3): CloudJettyRunner 
[url=https://127.0.0.1:51869/collection1]
at 
__randomizedtesting.SeedInfo.seed([3CC48935CE3AFB62:B490B6EF60C6969A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:103)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.e

[jira] [Commented] (SOLR-6807) Make handleSelect=false by default

2017-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066464#comment-16066464
 ] 

Jan Høydahl commented on SOLR-6807:
---

Great work, David.
Perhaps deprecate {{StandardRequestHandler}} now and remove it in 8.0?

I can only find one RefGuide mention of StandardRequestHandler, and that is in 
http://lucene.apache.org/solr/guide/6_6/the-dismax-query-parser.html

> Make handleSelect=false by default
> --
>
> Key: SOLR-6807
> URL: https://issues.apache.org/jira/browse/SOLR-6807
> Project: Solr
>  Issue Type: Task
>Affects Versions: 4.10.2
>Reporter: Alexandre Rafalovitch
>Assignee: David Smiley
>Priority: Minor
>  Labels: solrconfig.xml
> Fix For: master (7.0)
>
> Attachments: SOLR_6807_handleSelect_false.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_test_files.patch
>
>
> In the solrconfig.xml, we have a long explanation on the legacy 
> *requestDispatcher handleSelect="true"* section. Since we are cleaning up 
> legacy stuff for version 5, is it safe now to flip handleSelect's default to 
> be *false* and therefore remove both the attribute and the whole section 
> explaining it?
> Then, a section in the Reference Guide or even a blog post can explain what 
> to do for the old clients that still need it. But it does not seem to be 
> needed anymore by new users, and it possibly causes confusion now that we 
> have implicit, explicit and overlay handlers.
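
For context, a hedged sketch of the solrconfig.xml fragment in question (values 
illustrative, not copied from the patch); with {{handleSelect="false"}}, 
requests to {{/select}} are resolved to a registered handler directly rather 
than through the legacy qt-based dispatch:

{noformat}
<!-- Illustrative solrconfig.xml fragment (not copied from the patch). -->
<!-- With handleSelect="false" (the new default), /select is resolved  -->
<!-- like any other path, so the legacy qt-based dispatch goes away.   -->
<requestDispatcher handleSelect="false">
  <httpCaching never304="true"/>
</requestDispatcher>
{noformat}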






[jira] [Commented] (LUCENE-5644) ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066444#comment-16066444
 ] 

ASF GitHub Bot commented on LUCENE-5644:


Github user dsmiley commented on the issue:

https://github.com/apache/lucenenet/pull/208
  
Guys, the PR title here references LUCENE-5644 and this trigger's ASF 
JIRA-GitHub integration to link the conversation here to comments on that old 
issue.  Can you please edit the PR title?


> ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush
> --
>
> Key: LUCENE-5644
> URL: https://issues.apache.org/jira/browse/LUCENE-5644
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 4.8.1, 4.9, 6.0
>
> Attachments: LUCENE-5644.patch, LUCENE-5644.patch, LUCENE-5644.patch, 
> LUCENE-5644.patch, LUCENE-5644.patch
>
>
> This class remembers which thread used which DWPT, but it never clears
> this "affinity".  It really should clear it on flush, this way if the
> number of threads doing indexing has changed we only use as many DWPTs
> as there are incoming threads.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2017-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066420#comment-16066420
 ] 

Jan Høydahl commented on SOLR-10299:


Just gave it a shot and built a static search index at 
http://cominvent.com/solr/
The {{search-index.js}} file is ~600kb, which is not too bad, but we would 
perhaps want to make the index load in the background.
Try searching for e.g. "numVersionBuckets" or some other specific word and 
you'll see that it works, since it is actually a full-text index.
You can see the index itself at http://cominvent.com/solr/data/search-index.js

The benefit of this solution is that we can build the index as part of the 
static ref-guide build script (needs GoLang to build the index) and then upload 
it with the site itself without any server-side installs. The ref-guide site is 
25Mb and this index is 0.6Mb extra.

Of course this does not mean that the new ref-guide should not also be indexed 
by search-lucene.com or searchhub.lucidworks.com; we can have both...

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1405 - Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1405/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest.test

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([C6A801DD8D5B1E02:4EFC3E0723A773FA]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1265)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:419)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11500 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRecoveryZkTest_C6A801DD

[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2017-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066348#comment-16066348
 ] 

Jan Høydahl commented on SOLR-10299:


Perhaps the ref-guide is small enough to offer a client-side in-memory static 
index, see https://github.com/dchest/static-search  :-) The demo at 
https://www.codingrobots.com/search/ seems to satisfy the basic needs for a 
simple doc search. No idea how large the index would be given ~2Mb adoc 
content...

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 971 - Still Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/971/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.schema.TestUseDocValuesAsStored.testMultipleSearchResults

Error Message:
mismatch: 'myid1'!='myid' @ response/docs/[0]/id

Stack Trace:
java.lang.RuntimeException: mismatch: 'myid1'!='myid' @ response/docs/[0]/id
at 
__randomizedtesting.SeedInfo.seed([D8A5669ACB3054B1:EA8F610433CE7068]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:983)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:930)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.testMultipleSearchResults(TestUseDocValuesAsStored.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13052 lines...]
   [junit4] Suite: org.apache.solr.schema.TestUseDocValuesAsStored
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-

[jira] [Commented] (SOLR-10329) Rebuild Solr examples

2017-06-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066303#comment-16066303
 ] 

Alexandre Rafalovitch commented on SOLR-10329:
--

This Jira was supposed to support a sort-of rethinking of the examples from the 
ground up. Unfortunately, due to work and personal commitments, I do not 
currently have the time to do it. But I still think it should stay as a 
placeholder for this higher-level effort and/or for somebody else to add their 
overall thoughts.

In the meanwhile, for the specific items, I think we should have individual 
Jiras.

> Rebuild Solr examples
> -
>
> Key: SOLR-10329
> URL: https://issues.apache.org/jira/browse/SOLR-10329
> Project: Solr
>  Issue Type: Wish
>  Components: examples
>Reporter: Alexandre Rafalovitch
>  Labels: gsoc2017
>
> Apache Solr ships with a number of examples. They evolved from a kitchen-sink 
> example and are rather large. When new Solr features are added, they are 
> often shoehorned into the most appropriate example and sometimes are not 
> represented at all. 
> Often, for new users, it is hard to tell what part of an example is relevant, 
> what part is default, and what part is demonstrating something completely 
> different.
> It would take significant (and very appreciated) effort to review all the 
> examples and rebuild them to provide a clean way to showcase best practices 
> around base and most recent features.
> Specific issues are around kitchen-sink vs. minimal examples, a better 
> approach to "schemaless" mode, and creating examples and datasets that allow 
> creating both "hello world" and more-advanced tutorials.






[jira] [Resolved] (SOLR-10931) Resolve conflicting package names o.a.s.cloud.autoscaling

2017-06-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-10931.
--
Resolution: Fixed

> Resolve conflicting package names o.a.s.cloud.autoscaling
> -
>
> Key: SOLR-10931
> URL: https://issues.apache.org/jira/browse/SOLR-10931
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10931.patch
>
>
> At the moment, o.a.s.cloud.autoscaling is in core as well as in solrj modules.
> As per following comments, I think we should change them to different package 
> names:
> https://issues.apache.org/jira/browse/SOLR-9746?focusedCommentId=15658266&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15658266
> https://issues.apache.org/jira/browse/SOLR-9746?focusedCommentId=15654160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15654160
> Currently, the Eclipse project is broken on master due to this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4111 - Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4111/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

71 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.AnalysisAfterCoreReloadTest

Error Message:
org.apache.solr.AnalysisAfterCoreReloadTest

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.AnalysisAfterCoreReloadTest
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:273)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:233)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:355)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)
Caused by: java.io.FileNotFoundException: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/AnalysisAfterCoreReloadTest.class
 (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at 
sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288)
at sun.misc.Resource.cachedInputStream(Resource.java:77)
at sun.misc.Resource.getByteBuffer(Resource.java:160)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
... 12 more


FAILED:  junit.framework.TestSuite.org.apache.solr.BasicFunctionalityTest

Error Message:
org.apache.solr.BasicFunctionalityTest

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.BasicFunctionalityTest
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:273)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:233)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:355)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)
Caused by: java.io.FileNotFoundException: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/BasicFunctionalityTest.class
 (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at 
sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288)
at sun.misc.Resource.cachedInputStream(Resource.java:77)
at sun.misc.Resource.getByteBuffer(Resource.java:160)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
... 12 more


FAILED:  junit.framework.TestSuite.org.apache.solr.DisMaxRequestHandlerTest

Error Message:
org.apache.solr.DisMaxRequestHandlerTest

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.DisMaxRequestHandlerTest
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.inst

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+175) - Build # 19999 - Still Unstable!

2017-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/1/
Java: 64bit/jdk-9-ea+175 -XX:+UseCompressedOops -XX:+UseG1GC 
--illegal-access=deny

6 tests failed.
FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
org.apache.lucene.store.AlreadyClosedException: Already closed

Stack Trace:
javax.management.RuntimeMBeanException: 
org.apache.lucene.store.AlreadyClosedException: Already closed
at 
__randomizedtesting.SeedInfo.seed([E5FB8DFA9118914F:6B2AE9C0FC59C92A]:0)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:829)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:842)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:645)
at 
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
org.apache.solr.core.TestJmxIntegration.testJmxRegistration(TestJmxIntegration.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Statem

[JENKINS] Lucene-Solr-Tests-6.x - Build # 974 - Unstable

2017-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/974/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [SolrIndexSearcher, 
MMapDirectory, MDCAwareThreadPoolExecutor, SolrCore, MMapDirectory, 
MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:326)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2037)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2189)  at 
org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:949)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:91)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:728)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:923)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:859)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1019)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFa

[jira] [Commented] (SOLR-10397) Port 'autoAddReplicas' feature to the policy rules framework and make it work with non-shared filesystems

2017-06-28 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066146#comment-16066146
 ] 

Cao Manh Dat commented on SOLR-10397:
-

Committed to feature/autoscaling branch 
(https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=5232e36ce5503b79d635e01db810909ed4d3e40d)

> Port 'autoAddReplicas' feature to the policy rules framework and make it work 
> with non-shared filesystems
> -
>
> Key: SOLR-10397
> URL: https://issues.apache.org/jira/browse/SOLR-10397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10397.1.patch, SOLR-10397.patch
>
>
> Currently 'autoAddReplicas=true' can be specified in the Collection Create 
> API to automatically add replicas when a replica becomes unavailable. I 
> propose to move this feature to the autoscaling cluster policy rules design.
> This will include the following:
> * Trigger support for ‘nodeLost’ event type
> * Modification of existing implementation of ‘autoAddReplicas’ to 
> automatically create the appropriate ‘nodeLost’ trigger.
> * Any such auto-created trigger must be marked internally such that setting 
> ‘autoAddReplicas=false’ via the Modify Collection API should delete or 
> disable corresponding trigger.
> * Support for non-HDFS filesystems while retaining the optimization afforded 
> by HDFS i.e. the replaced replica can point to the existing data dir of the 
> old replica.
> * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across 
> the entire cluster using cluster properties in favor of using the 
> suspend-trigger/resume-trigger APIs.
> This will retain backward compatibility for the most part and keep a common 
> use-case easy to enable as well as make it available to more people (i.e. 
> people who don't use HDFS).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10397) Port 'autoAddReplicas' feature to the policy rules framework and make it work with non-shared filesystems

2017-06-28 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-10397:

Attachment: SOLR-10397.1.patch

Implementation of AutoAddReplicasPlanAction, waiting for SOLR-10965 to remove 
the old implementation of autoAddReplicas.

> Port 'autoAddReplicas' feature to the policy rules framework and make it work 
> with non-shared filesystems
> -
>
> Key: SOLR-10397
> URL: https://issues.apache.org/jira/browse/SOLR-10397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10397.1.patch, SOLR-10397.patch
>
>
> Currently 'autoAddReplicas=true' can be specified in the Collection Create 
> API to automatically add replicas when a replica becomes unavailable. I 
> propose to move this feature to the autoscaling cluster policy rules design.
> This will include the following:
> * Trigger support for ‘nodeLost’ event type
> * Modification of existing implementation of ‘autoAddReplicas’ to 
> automatically create the appropriate ‘nodeLost’ trigger.
> * Any such auto-created trigger must be marked internally such that setting 
> ‘autoAddReplicas=false’ via the Modify Collection API should delete or 
> disable corresponding trigger.
> * Support for non-HDFS filesystems while retaining the optimization afforded 
> by HDFS i.e. the replaced replica can point to the existing data dir of the 
> old replica.
> * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across 
> the entire cluster using cluster properties in favor of using the 
> suspend-trigger/resume-trigger APIs.
> This will retain backward compatibility for the most part and keep a common 
> use-case easy to enable as well as make it available to more people (i.e. 
> people who don't use HDFS).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6492) Solr field type that supports multiple, dynamic analyzers

2017-06-28 Thread Jan Rasehorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16028252#comment-16028252
 ] 

Jan Rasehorn edited comment on SOLR-6492 at 6/28/17 8:00 AM:
-

Hi Guys, this sounds like a solution for indexing a whole document when the 
document language is known upfront. 
But what if the language is not known upfront or if a document contains 
different text paragraphs with possibly different languages - like it can often 
be found in support tickets?

Since I did not like the approach using separate fields, I did it the following 
way:
1. I wrote a tokenizer that detects the paragraphs based on a given regexp (a 
result of cleaning up the support ticket text)
2. The tokenizer detects the paragraph language at runtime (using the solr 
built in language detector)
3. The tokenizer runs Open NLP POS tagging depending on the language it 
identified and saves the POS tags in the type attribute for each token. 
The language is stored as payload for each token.
4. I developed a "Delegating filter", which only delegates the "incrementToken" 
call to the filter (stemmer) if the payload value matched the filter value. 
This way I can configure in schema.xml, which stemmer to use for which language.

With this approach I do not depend on knowning the document language upfront.
What do you think?



was (Author: jan rasehorn):
Hi Guys, this sounds like a solution for indexing a whole document when the 
document language is known upfront. 
But what if the language is not known upfront or if a document contains 
different text paragraphs with possibly different languages - like it can often 
be found in support tickets?

Since I did not like the approach using separate fields, I did it the following 
way:
1. I wrote a tokenizer that detects the paragraphs based on a given regexp (a 
result of cleaning up the support ticket text)
2. The tokenizer detects the paragraph language at runtime (using the solr 
built in language detector)
3. The tokenizer runs part Open NLP POS tagging depending on the language it 
identified and saves the POS tags in the type attribute for each token. 
The language is stored as payload for each token.
4. I developed a "Delegating filter", which only delegates the "incrementToken" 
call to the filter (stemmer) if the payload value matched the filter value. 
This way I can configure in schema.xml, which stemmer to use for which language.

With this approach I do not depend on knowning the document language upfront.
What do you think?


> Solr field type that supports multiple, dynamic analyzers
> -
>
> Key: SOLR-6492
> URL: https://issues.apache.org/jira/browse/SOLR-6492
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Trey Grainger
> Fix For: 5.0
>
>
> A common request - particularly for multilingual search - is to be able to 
> support one or more dynamically-selected analyzers for a field. For example, 
> someone may have a "content" field and pass in a document in Greek (using an 
> Analyzer with Tokenizer/Filters for German), a separate document in English 
> (using an English Analyzer), and possibly even a field with mixed-language 
> content in Greek and English. This latter case could pass the content 
> separately through both an analyzer defined for Greek and another Analyzer 
> defined for English, stacking or concatenating the token streams based upon 
> the use-case.
> There are some distinct advantages in terms of index size and query 
> performance which can be obtained by stacking terms from multiple analyzers 
> in the same field instead of duplicating content in separate fields and 
> searching across multiple fields. 
> Other non-multilingual use cases may include things like switching to a 
> different analyzer for the same field to remove a feature (i.e. turning 
> on/off query-time synonyms against the same field on a per-query basis).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 7x, and 7.0 branches

2017-06-28 Thread Anshum Gupta
Thanks for confirming that Alan, I had similar thoughts but wasn’t sure. 

I don’t want to change anything that I’m not confident about so I’m just going 
to create remove those and commit it to my fork. If someone who’s confident 
agrees with what I’m doing, I’ll go ahead and make those changes to the 
upstream :).

-Anshum



> On Jun 28, 2017, at 12:54 AM, Alan Woodward  wrote:
> 
> We don’t need to support lucene5x codecs in 7, so you should be able to just 
> remove those tests (and the the relevant packages from backwards-codecs too), 
> I think?
> 
> 
>> On 28 Jun 2017, at 08:38, Anshum Gupta > > wrote:
>> 
>> I tried to move forward to see this work before automatically computing the 
>> versions but I have about 30 odd failing test. I’ve made those changes and 
>> pushed to my local GitHub account in case you have the time to look: 
>> https://github.com/anshumg/lucene-solr 
>>  
>> 
>> Here’s the build summary if that helps:
>> 
>>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of 
>> 31):
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>>[junit4]   - 
>> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>>[junit4] 
>>[junit4] 
>>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>>[junit4] Execution time total: 47 seconds
>>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24 
>> ignored (24 assumptions)
>> 
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Jun 27, 2017, at 4:15 AM, Adrien Grand >> > wrote:
>>> 
>>> The test***BackwardCompatibility cases can be removed since they make sure 
>>> that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have to be 
>>> able to read Lucene 6 norms.
>>> 
>>> TestSegmentInfos needs to be adapted to the new versions, we need to 
>>> replace 5 with 6 and 8 with 9. Maybe we should compute those numbers 
>>> automatically based on Version.LATEST.major so that it does not require 
>>> manual changes when moving to a new major version. That would give 5 -> 
>>> Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
>>> 
>>> I can do those changes on Thursday if you don't feel comfortable doing them.
>>> 
>>> 
>>> 
>>> Le mar. 27 juin 2017 à 08:12, Anshum Gupta >> > a écrit :
>>> Without making any changes at all and just bumping up the version, I hit 
>>> these errors when running the tests:
>>> 
>>>[junit4]   2> NOTE: reproduce with: ant test  
>>> -Dtestcase=TestSegmentInfos -Dtests.method=testIllegalCreatedVersion 
>>> -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR 
>>> -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true 
>>> -Dtests.file.encoding=US-ASCII
>>>[junit4] FAILURE 0.01s J0 | TestSegmentInfos.testIllegalCreatedVersion 
>>> <<<
>>>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
>>> Expected exception IllegalArgumentException but no exception was thrown
>>>[junit4]>at 
>>> __randomizedtesting.SeedInfo.seed([C818A61FA6C293A1:CE340683BE44C211]:0)
>>>[junit4]>at 
>>> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2672)
>>>[junit4]>at 
>>> org.apache.lucene.index.TestSegmentInfos.testIllegalCreatedVersion(TestSegmentInfos.java:35)
>>>[junit4]>at java.lang.Thread.run(Thread.java:748)
>>>[junit4]   2> NOTE: reproduce with: ant test  
>>> -Dtestcase=TestSegmentInfos -Dtests.method=testVersionsOneSegment 
>>> -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR 
>>> -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true 
>>> -Dtests.file.encoding=US-ASCII
>>>[junit4] ERROR   0.00s J0 | TestSegmentInfos.testVersionsOneSegment <<<
>>>[junit4]> Throwable #1: 
>>> org.apache.lucene.index.CorruptIndexException: segments file recorded 
>>> indexCreatedVersionMajor=8 but segment=_0(7.0.0):C1 has older version=7.0.0 
>>> (resource=Bu

Re: 7x, and 7.0 branches

2017-06-28 Thread Alan Woodward
We don’t need to support lucene5x codecs in 7, so you should be able to just 
remove those tests (and the the relevant packages from backwards-codecs too), I 
think?


> On 28 Jun 2017, at 08:38, Anshum Gupta  wrote:
> 
> I tried to move forward to see this work before automatically computing the 
> versions but I have about 30 odd failing test. I’ve made those changes and 
> pushed to my local GitHub account in case you have the time to look: 
> https://github.com/anshumg/lucene-solr 
>  
> 
> Here’s the build summary if that helps:
> 
>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of 31):
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>[junit4]   - 
> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>[junit4] 
>[junit4] 
>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>[junit4] Execution time total: 47 seconds
>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24 
> ignored (24 assumptions)
> 
> 
> -Anshum
> 
> 
> 
>> On Jun 27, 2017, at 4:15 AM, Adrien Grand > > wrote:
>> 
>> The test***BackwardCompatibility cases can be removed since they make sure 
>> that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have to be 
>> able to read Lucene 6 norms.
>> 
>> TestSegmentInfos needs to be adapted to the new versions, we need to replace 
>> 5 with 6 and 8 with 9. Maybe we should compute those numbers automatically 
>> based on Version.LATEST.major so that it does not require manual changes 
>> when moving to a new major version. That would give 5 -> 
>> Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
>> 
>> I can do those changes on Thursday if you don't feel comfortable doing them.
>> 
>> 
>> 
>> Le mar. 27 juin 2017 à 08:12, Anshum Gupta > > a écrit :
>> Without making any changes at all and just bumping up the version, I hit 
>> these errors when running the tests:
>> 
>>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSegmentInfos 
>> -Dtests.method=testIllegalCreatedVersion -Dtests.seed=C818A61FA6C293A1 
>> -Dtests.slow=true -Dtests.locale=es-PR -Dtests.timezone=Etc/GMT+4 
>> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>>[junit4] FAILURE 0.01s J0 | TestSegmentInfos.testIllegalCreatedVersion <<<
>>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
>> Expected exception IllegalArgumentException but no exception was thrown
>>[junit4]> at 
>> __randomizedtesting.SeedInfo.seed([C818A61FA6C293A1:CE340683BE44C211]:0)
>>[junit4]> at 
>> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2672)
>>[junit4]> at 
>> org.apache.lucene.index.TestSegmentInfos.testIllegalCreatedVersion(TestSegmentInfos.java:35)
>>[junit4]> at java.lang.Thread.run(Thread.java:748)
>>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSegmentInfos 
>> -Dtests.method=testVersionsOneSegment -Dtests.seed=C818A61FA6C293A1 
>> -Dtests.slow=true -Dtests.locale=es-PR -Dtests.timezone=Etc/GMT+4 
>> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>>[junit4] ERROR   0.00s J0 | TestSegmentInfos.testVersionsOneSegment <<<
>>[junit4]> Throwable #1: 
>> org.apache.lucene.index.CorruptIndexException: segments file recorded 
>> indexCreatedVersionMajor=8 but segment=_0(7.0.0):C1 has older version=7.0.0 
>> (resource=BufferedChecksumIndexInput(MockIndexInputWrapper(RAMInputStream(name=segments_1
>>[junit4]> at 
>> __randomizedtesting.SeedInfo.seed([C818A61FA6C293A1:A7477EE8875F2E36]:0)
>>[junit4]> at 
>> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:392)
>>[junit4]> at 
>> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
>>[junit4]> at 
>> org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:443)
>>[junit4]> at 
>> org.apache.

[jira] [Comment Edited] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066073#comment-16066073
 ] 

Ishan Chattopadhyaya edited comment on SOLR-10272 at 6/28/17 7:48 AM:
--

It seems bin/solr's create_core for standalone Solr is not working due to this. 
There's no unit test for this at the moment. I'm looking into it.


was (Author: ichattopadhyaya):
It seems bin/solr's create_core for standalone Solr is not working due to this. 
There's no unit test for this at the moment.

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10272.patch, SOLR-10272.patch.gz, 
> SOLR-10272.patch.gz, SOLR-10272.patch.gz
>
>
> This Jira's motivation is to improve the creating a collection experience 
> better for users.
> To create a collection we need to specify a configName that needs to be 
> present in ZK. When a new user is starting Solr why should he worry about 
> having to know about configsets before he can can create a collection.
> When you create a collection using "bin/solr create" the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is the rough outline of what I think we can do here:
> 1. When you start solr , the bin script checks to see if 
> "/configs/_baseConfigSet" znode is present . If not it uploads the 
> "basic_configs". 
> We can discuss if its the "basic_configs" or something other default config 
> set. 
> Also we can discuss the name for "/_baseConfigSet". Moving on though
> 2. When a user creates a collection from the API  
> {{admin/collections?action=CREATE&name=gettingstarted}} here is what we do :
> Use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> over the default config set to a configset with the name of the collection 
> specified.
> collection.configName can truly be an optional parameter. If its specified we 
> don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



<    1   2   3   >