[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 622 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/622/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.facet.FieldFacetCloudTest

Error Message:
org.apache.http.ParseException: Invalid content type: 

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: org.apache.http.ParseException: Invalid content type: 
  at __randomizedtesting.SeedInfo.seed([890F42BAF1B213B3]:0)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:523)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
  at org.apache.solr.analytics.facet.AbstractAnalyticsFacetCloudTest.setupCluster(AbstractAnalyticsFacetCloudTest.java:58)
  at org.apache.solr.analytics.facet.FieldFacetCloudTest.beforeClass(FieldFacetCloudTest.java:90)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.ParseException: Invalid content type: 
  at org.apache.http.entity.ContentType.parse(ContentType.java:298)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:574)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
  ... 32 more


FAILED:  org.apache.solr.cloud.TestOnReconnectListenerSupport.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:43201

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:43201
  at __randomizedtesting.SeedInfo.seed([4FCBD03BE1C59DBA:C79FEFE14F39F042]:0)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1477 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1477/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth:
   1) Thread[id=9535, name=jetty-launcher-1437-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=9545, name=jetty-launcher-1437-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=9535, name=jetty-launcher-1437-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(

[jira] [Reopened] (SOLR-11494) Expected mime type application/octet-stream but got text/html

2017-10-16 Thread khawaja MUHAMMAD Shoaib (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

khawaja MUHAMMAD Shoaib reopened SOLR-11494:


Solr is throwing an exception when connecting through Spring Data Solr. 

> Expected mime type application/octet-stream but got text/html
> -
>
> Key: SOLR-11494
> URL: https://issues.apache.org/jira/browse/SOLR-11494
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI, SolrJ
>Affects Versions: 6.5
> Environment: Windows 10
> Java jdk1.8.0_144
> Solr 6.5.0
> Spring Data Solr 2.0.5.RELEASE
> Spring Version 4.3.12.RELEASE
>Reporter: khawaja MUHAMMAD Shoaib
> Attachments: MerchantModel.java, MerchantRepository.java, 
> SolrConfig.java
>
>
> I have been following the tutorial at the link below to implement Spring 
> Data Solr:
> http://www.baeldung.com/spring-data-solr
> Attached are my config file, model, and repository for Spring Data Solr.
> When I make any query or save my model, I receive the below exception.
> My Solr is working fine when I ping it from the browser at 
> "http://127.0.0.1:8983/solr/"
> {code:java}
> MerchantModel model = new MerchantModel();
> model.setId("2");
> model.setLocation("31.5287,74.4121");
> model.setTitle("khawaja");
> merchantRepository.save(model);
> {code}
>  
> Upon save I am getting the below exception:
> ###
> org.springframework.data.solr.UncategorizedSolrException: Error from server 
> at http://127.0.0.1:8983/solr: Expected mime type application/octet-stream 
> but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ; nested exception is 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ###



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice

2017-10-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11297:
--
Fix Version/s: 6.6.2

> Message "Lock held by this virtual machine" during startup.  Solr is trying 
> to start some cores twice
> -
>
> Key: SOLR-11297
> URL: https://issues.apache.org/jira/browse/SOLR-11297
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
> Fix For: 7.0.1, 7.1, 6.6.2
>
> Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, 
> SOLR-11297.sh, solr6_6-startup.log
>
>
> Sometimes when Solr is restarted, I get some "lock held by this virtual 
> machine" messages in the log, and the admin UI has messages about a failure 
> to open a new searcher.  It doesn't happen on all cores, and the list of 
> cores that have the problem changes on subsequent restarts.  The cores that 
> exhibit the problems are working just fine -- the first core load is 
> successful; the failure to open a new searcher happens on a second core 
> load attempt.
> None of the cores in the system are sharing an instanceDir or dataDir.  This 
> has been verified several times.
> The index is sharded manually, and the servers are not running in cloud mode.
> One critical detail to this issue: The cores are all perfectly functional.  
> If somebody is seeing an error message that results in a core not working at 
> all, then it is likely a different issue.






Re: 7.x has a major issue when upgrading from 6x I think

2017-10-16 Thread Erick Erickson
Right. I guess my first e-mail was clumsily written.

Yes, the two steps you outlined are all you need to do.

You can get the same effect by
> starting with 6x (no changes) and creating a collection
> changing legacyCloud to false
> restarting 6x

for the same reason.

On Mon, Oct 16, 2017 at 7:31 PM, Varun Thacker  wrote:
> Maybe I should have been more clear -
>
> bq. The problem is that legacyCloud now defaults to true, and 6x does _not_
>
> I was just correcting that statement with what the defaults are (it's the
> reverse of what you have written).
>
>
> So if I understand correctly is this all you need to do to reproduce the
> problem:
>
> Start Solr 6.x and create a collection
> Start Solr 7.x and point it to the same solr home
>
> And that because the defaults are now different the core fails to load?
>
> On Mon, Oct 16, 2017 at 7:06 PM, Erick Erickson wrote:
>>
>> That's true. And that's my point. A user who kept the defaults in both
>> versions can't use Solr with 7x. Are you saying that we've never supported
>> switching an existing collection created with legacyCloud set to
>> true/default over to false? As in it fails to load? Nowhere in the upgrade
>> notes, for instance, is there any notice like "If upgrading existing 6x
>> installations with legacyCloud set to true/default, you must explicitly set
>> legacyCloud=true to use Solr."
>>
>> Effectively that means they can never get to legacyCloud=false without
>> hand-editing each and every core.properties file or starting over.
>>
>> On Oct 16, 2017 5:48 PM, "Varun Thacker"  wrote:
>>>
>>> Hi Erick,
>>>
>>> In Solr 6.x legacyCloud defaults to true
>>> In Solr 7.x legacyCloud defaults to false ( ZK as truth as per
>>> https://issues.apache.org/jira/browse/SOLR-8256 )
>>>
>>> On Mon, Oct 16, 2017 at 5:29 PM, Erick Erickson wrote:

 Check me out to see if I'm hallucinating

 Create a collection with 6x _without_ changing legacyCloud (i.e. it
 defaults to "false").
 Try opening it with 7x
 BOOOM

 I'm seeing:
 Caused by: org.apache.solr.common.SolrException: No coreNodeName for

 CoreDescriptor[name=eoe_shard1_replica1;instanceDir=/Users/Erick/apache/solrJiras/branch_7x/solr/example/cloud/node1/solr/eoe_shard1_replica1]

 The problem is that legacyCloud now defaults to true, and 6x does
 _not_ save coreNodeName to core.properties files. However,
 ZkController.checkStateInZk requires that coreNodeName be non-null and
 it's read from core.properties.

 I get the exact same behavior when I create a collection in 6x then
 change legacyCloud to false and restart 6x Solr.
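
 To make the failure mode concrete: a small hypothetical helper (not Solr
 code) that checks a core.properties payload for the coreNodeName key that
 ZkController.checkStateInZk expects; 6x with legacyCloud at its default
 does not write that key.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch only: a 6x core.properties written with legacyCloud at its default
// lacks coreNodeName, which is exactly the property 7x's
// ZkController.checkStateInZk requires to be non-null.
public class CorePropertiesCheck {
    /** Returns true if the given core.properties content defines coreNodeName. */
    public static boolean hasCoreNodeName(String coreProperties) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(coreProperties));
        return props.getProperty("coreNodeName") != null;
    }
}
```

 A core.properties containing only name/collection entries fails this check,
 which is the "No coreNodeName for CoreDescriptor" situation above.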

 I don't think this should hold up 7.1 because of the issue from last
 week, people affected by this can set legacyCloud=true to get by.

 Or I need to see an eye doctor.

 Raise a JIRA?


>>>
>




[jira] [Commented] (SOLR-11481) Ref guide page explaining nuances of the recovery process

2017-10-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206997#comment-16206997
 ] 

Varun Thacker commented on SOLR-11481:
--

1. The replica asks the leader for its fingerprint and compares it to the 
local copy. The fingerprint is calculated on the index, and we compare the max 
version number from the index. If the versions match, the indexes are the same 
and we mark the replica as active.
2. If the highest version on the replica is behind the leader, the replica 
asks for the last 100 (default) updates from the leader.
3. If the replica is missing fewer than 100 updates, it asks the leader for 
the specific missing updates and applies them locally.
4. If the replica has fallen behind by more than 100 updates, we resort to 
index replication.
5. In full replication, we compare each segment locally vs. the leader and 
fetch those segments that are either missing or whose checksums don't match.
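
The steps above can be condensed into a small decision function. This is an
illustrative sketch, not actual Solr code: the class and method names are
hypothetical, and the 100-update threshold is the PeerSync default from step 2.

```java
import java.util.List;

// Hypothetical sketch of the recovery decision described above.
public class RecoverySketch {
    static final int PEER_SYNC_LIMIT = 100; // default number of updates kept for PeerSync

    /** Returns "ACTIVE", "PEER_SYNC", or "REPLICATION" for a recovering replica. */
    public static String decide(long leaderMaxVersion, long replicaMaxVersion,
                                List<Long> missingUpdates) {
        if (replicaMaxVersion == leaderMaxVersion) {
            return "ACTIVE";        // fingerprints match: indexes are the same
        }
        if (missingUpdates.size() < PEER_SYNC_LIMIT) {
            return "PEER_SYNC";     // fetch only the specific missing updates
        }
        return "REPLICATION";       // too far behind: segment-level replication
    }
}
```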

> Ref guide page explaining nuances of the recovery process
> -
>
> Key: SOLR-11481
> URL: https://issues.apache.org/jira/browse/SOLR-11481
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> The Solr recovery process involves PeerSync, which has configuration 
> parameters controlling the number of records it should keep.
> If this fails, we do an index replication, which can optionally be 
> throttled.
> I think it's worth explaining to users what these configuration parameters 
> are and how a node actually recovers. 






Re: [VOTE] Release Lucene/Solr 6.6.2 RC1

2017-10-16 Thread Ishan Chattopadhyaya
Owing to the serious nature of the exploit being fixed with these
artifacts, and given that there are +1s from three PMC members and no -1s,
I'm going to close the voting now.

This vote has passed. Thanks to everyone who voted.

On Tue, Oct 17, 2017 at 1:32 AM, Anshum Gupta  wrote:

> Smoke tester is happy!
>
> +1
>
> SUCCESS! [0:45:12.883194]
>
> -Anshum
>
>
>
> On Oct 15, 2017, at 12:01 PM, Ishan Chattopadhyaya <ichattopadhy...@gmail.com> wrote:
>
> Please vote for release candidate 1 for Lucene/Solr 6.6.2.
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
>
> Here's my +1
> SUCCESS! [0:29:21.090759]
>
>
>


[jira] [Comment Edited] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-10-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206976#comment-16206976
 ] 

Cao Manh Dat edited comment on SOLR-11423 at 10/17/17 4:14 AM:
---

Should we modify the behavior here a little bit: instead of throwing an 
IllegalStateException (which, as an unchecked exception, can lead to many 
errors), should we first retry until a timeout when the queue is full?
[~dragonsinth] I really want to hear about your cluster status after 
SOLR-11443 gets applied (maybe we do not need this hard cap at all if the 
Overseer can process messages fast enough).


was (Author: caomanhdat):
Should we modify the behavior here a little bit: instead of throwing an 
IllegalStateException (which, as an unchecked exception, can lead to many 
errors), should we first retry until a timeout when the queue is full?
[~dragonsinth] I really want to hear about your cluster status after 
SOLR-11443 gets applied.
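
The retry-until-timeout idea can be sketched with a plain
java.util.concurrent bounded queue; this is illustrative only and stands in
for the actual ZK-backed Overseer queue, with hypothetical names.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch only: on a full capped queue, block the producer up to a timeout
// instead of failing immediately with an unchecked exception.
public class CappedEnqueueSketch {
    /** Tries to enqueue, waiting up to timeoutMs for space; true on success. */
    public static <T> boolean enqueueWithTimeout(BlockingQueue<T> queue, T item,
                                                 long timeoutMs) throws InterruptedException {
        // offer(e, timeout, unit) waits for capacity and returns false on timeout
        return queue.offer(item, timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

If the timed offer still fails, the caller can raise an error, but brief
bursts are absorbed rather than surfacing immediately as exceptions.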

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?






[jira] [Resolved] (SOLR-11447) ZkStateWriter should process commands atomically

2017-10-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-11447.
-
    Resolution: Fixed
Fix Version/s: master (8.0)
               7.2

> ZkStateWriter should process commands atomically
> ---
>
> Key: SOLR-11447
> URL: https://issues.apache.org/jira/browse/SOLR-11447
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11447.patch, SOLR-11447.patch
>
>
> ZkStateWriter should process all the ZkWriteCommands corresponding to a 
> message atomically (right now we process the commands one by one). 
> Otherwise some ZkWriteCommands can get lost. Here is the case:
> 1. We process a DOWNNODE message (or any message that produces multiple 
> ZkWriteCommands).
> 2. We poll that message from stateUpdateQueue and push it to workQueue (for 
> backup).
> 3. The DOWNNODE message is converted into multiple ZkWriteCommands.
> 4. We enqueue the ZkWriteCommands into ZkStateWriter one by one. Any 
> command can trigger a flush, which calls the onWrite() callback to empty 
> workQueue.
> 5. The Overseer gets restarted, and the rest of the ZkWriteCommands (which 
> were not processed in step 4) are lost because the workQueue is now empty 
> (due to the onWrite() callback in step 4).






[jira] [Resolved] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-11443.
-
    Resolution: Fixed
Fix Version/s: master (8.0)
               7.2

> Remove the usage of workqueue for Overseer
> --
>
> Key: SOLR-11443
> URL: https://issues.apache.org/jira/browse/SOLR-11443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11443.patch, SOLR-11443.patch, SOLR-11443.patch
>
>
> If we can remove the usage of the workQueue, we can save a lot of blocking 
> IO in the Overseer and hence boost performance a lot.






[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-10-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206976#comment-16206976
 ] 

Cao Manh Dat commented on SOLR-11423:
-

Should we modify the behavior here a little bit: instead of throwing an 
IllegalStateException (which, as an unchecked exception, can lead to many 
errors), should we first retry until a timeout when the queue is full?
[~dragonsinth] I really want to hear about your cluster status after 
SOLR-11443 gets applied.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?






[jira] [Commented] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206943#comment-16206943
 ] 

ASF subversion and git services commented on SOLR-11443:


Commit 58730dcd6751427cf901552b4453eb817dfc631c in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=58730dc ]

SOLR-11443: Update CHANGES.txt


> Remove the usage of workqueue for Overseer
> --
>
> Key: SOLR-11443
> URL: https://issues.apache.org/jira/browse/SOLR-11443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11443.patch, SOLR-11443.patch, SOLR-11443.patch
>
>
> If we can remove the usage of the workQueue, we can save a lot of blocking 
> IO in the Overseer and hence boost performance a lot.






[jira] [Commented] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206942#comment-16206942
 ] 

ASF subversion and git services commented on SOLR-11443:


Commit 9fac59ef55a134ff363c8bc4f0e5589769cf4962 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9fac59e ]

SOLR-11443: Update CHANGES.txt


> Remove the usage of workqueue for Overseer
> --
>
> Key: SOLR-11443
> URL: https://issues.apache.org/jira/browse/SOLR-11443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11443.patch, SOLR-11443.patch, SOLR-11443.patch
>
>
> If we can remove the usage of the workQueue, we can save a lot of blocking 
> IO in the Overseer and hence boost performance a lot.






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 61 - Failure

2017-10-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/61/

No tests ran.

Build Log:
[...truncated 28021 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL "file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (12.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.2.0-src.tgz...
   [smoker] 30.9 MB in 0.08 sec (373.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.tgz...
   [smoker] 69.6 MB in 0.20 sec (346.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.zip...
   [smoker] 80.0 MB in 0.23 sec (342.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6221 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6221 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   7.1.0
   [smoker] Traceback (most recent call last):
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1484, in <module>
   [smoker]     main()
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1428, in main
   [smoker]     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args))
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1466, in smokeTest
   [smoker]     unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL)
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 622, in unpackAndVerify
   [smoker]     verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 774, in verifyUnpacked
   [smoker]     confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1404, in confirmAllReleasesAreTestedForBackCompat
   [smoker]     raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by TestBackwardsCompatibility?
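The check that fails here is simple set arithmetic: every release ever shipped must appear in TestBackwardsCompatibility's list of tested index versions. A rough standalone sketch of the same idea (the version lists below are illustrative placeholders, not the real data smokeTestRelease.py scrapes):

```python
# Sketch of the coverage check that smokeTestRelease.py's
# confirmAllReleasesAreTestedForBackCompat performs: every past release must
# show up in TestBackwardsCompatibility's tested-versions list.

def untested_releases(all_releases, tested_versions):
    """Return past releases missing from back-compat test coverage."""
    tested = set(tested_versions)
    return [v for v in all_releases if v not in tested]

# Placeholder data: releases found on the mirrors vs. versions named in the
# test source (in reality both lists are discovered, not hard-coded).
past_releases = ["7.0.0", "7.0.1", "7.1.0"]
tested_versions = ["7.0.0", "7.0.1"]

missing = untested_releases(past_releases, tested_versions)
if missing:
    print("Releases that don't seem to be tested:", ", ".join(missing))
```

In the log above the same check flags 7.1.0; the fix is typically to add the new release's back-compat index and version entry to TestBackwardsCompatibility.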

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:622: exec returned: 1

Total time: 156 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-11478) Solr should remove its entry from live_nodes in zk immediately on shutdown and add after solr has loaded its cores and is ready to serve requests.

2017-10-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206914#comment-16206914
 ] 

Cao Manh Dat edited comment on SOLR-11478 at 10/17/17 2:42 AM:
---

bq. AFAIK Solr node that doesn't have any replicas for a collection - forwards 
all requests for this collection to other nodes. Shouldn't Solr node with 
replicas in DOWN state do the same? Thus /live_nodes entry can be present and 
all its replicas be DOWN - but node still be operational. WDYT?

Yeah, that sounds reasonable, but should we handle this in another ticket? 
Because it is a different solution. I still think that you should use 
{{states.json}} info so you can boost performance and avoid unnecessary 
routing.


was (Author: caomanhdat):
bq. AFAIK Solr node that doesn't have any replicas for a collection - forwards 
all requests for this collection to other nodes. Shouldn't Solr node with 
replicas in DOWN state do the same? Thus /live_nodes entry can be present and 
all its replicas be DOWN - but node still be operational. WDYT?

Yeah, that sounds reasonable, but should we handle this in another ticket? 
Because it is a different solution. 

> Solr should remove its entry from live_nodes in zk immediately on shutdown 
> and add after solr has loaded its cores and is ready to serve requests.
> 
>
> Key: SOLR-11478
> URL: https://issues.apache.org/jira/browse/SOLR-11478
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Binoy Dalal
>Priority: Minor
> Attachments: SOLR-11478.patch
>
>
> Solr currently, upon receiving the stop command, removes its entry from the 
> zk /live_nodes znode after it has finished processing all inflight requests, 
> just before finally shutting down.
> In this case, any applications that depend on a solr node's live_node entry 
> to decide whether or not to query it fail once the stop command has been 
> executed but solr has not yet fully shut down.
> Something similar occurs during startup of a solr node. The solr node seems 
> to add its entry to /live_nodes in zk once it is up but before it has 
> started accepting requests, and once again this causes dependent applications 
> to fail in a similar fashion.
> Hence, removal of the node entry and addition of the same to the zk 
> live_nodes immediately upon shutting down and at the very end upon coming up 
> respectively will greatly benefit applications that depend on the live_nodes 
> znode.
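The ordering problem the issue describes can be sketched as a toy state machine: a client that routes purely on /live_nodes is wrong whenever the znode entry exists while the node cannot actually serve. The event names below are hypothetical labels for the lifecycle steps, not real Solr or ZooKeeper calls:

```python
# Toy model of the startup/shutdown ordering described above.

def bad_window_events(events):
    """Return the events after which the node is listed in /live_nodes
    but cannot serve requests (the window that breaks routing clients)."""
    in_live_nodes = False
    serving = False
    bad = []
    for ev in events:
        if ev == "add_live_node":
            in_live_nodes = True
        elif ev == "remove_live_node":
            in_live_nodes = False
        elif ev == "cores_loaded":
            serving = True
        elif ev == "stop_serving":
            serving = False
        if in_live_nodes and not serving:
            bad.append(ev)
    return bad

# Current startup: register in /live_nodes before cores load -> bad window.
current_startup = ["add_live_node", "cores_loaded"]
# Proposed startup: register only once cores are loaded -> no bad window.
proposed_startup = ["cores_loaded", "add_live_node"]
```

Running `bad_window_events(current_startup)` flags the registration event, while the proposed ordering produces no bad window; the shutdown side is symmetric (remove the /live_nodes entry first, then stop serving).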



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (SOLR-11478) Solr should remove its entry from live_nodes in zk immediately on shutdown and add after solr has loaded its cores and is ready to serve requests.

2017-10-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11478:

Attachment: SOLR-11478.patch

Patch for this ticket, all tests passed. Will commit soon.

> Solr should remove its entry from live_nodes in zk immediately on shutdown 
> and add after solr has loaded its cores and is ready to serve requests.
> 
>
> Key: SOLR-11478
> URL: https://issues.apache.org/jira/browse/SOLR-11478
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Binoy Dalal
>Priority: Minor
> Attachments: SOLR-11478.patch
>






Re: 7.x has a major issue when upgrading from 6x I think

2017-10-16 Thread Varun Thacker
Maybe I should have been more clear -

bq. The problem is that legacyCloud now defaults to true, and 6x does _not_

I was just correcting that statement with what the defaults are (it's the
reverse of what you have written).


So if I understand correctly is this all you need to do to reproduce the
problem:

   1. Start Solr 6.x and create a collection
   2. Start Solr 7.x and point it to the same solr home

And because the defaults are now different, the core fails to load?

On Mon, Oct 16, 2017 at 7:06 PM, Erick Erickson 
wrote:

> That's true. And that's my point. A user has defaults set in both and
> can't use Solr with 7x. Are you saying that we've never supported switching
> an existing collection created with legacyCloud set to true/default to
> false? As in fail to load? Nowhere in the upgrade notes, for instance, is
> any notice like "If upgrading existing 6x installations with legacyCloud
> set to true/default you must set it to true to use Solr."
>
> Effectively that means they can never get to legacyCloud=false without
> hand-editing each and every core.properties file or starting over.
>
> On Oct 16, 2017 5:48 PM, "Varun Thacker"  wrote:
>
>> Hi Erick,
>>
>> In Solr 6.x legacyCloud defaults to true
>> In Solr 7.x legacyCloud defaults to false ( ZK as truth as per
>> https://issues.apache.org/jira/browse/SOLR-8256 )
>>
>> On Mon, Oct 16, 2017 at 5:29 PM, Erick Erickson 
>> wrote:
>>
>>> Check me out to see if I'm hallucinating
>>>
>>> Create a collection with 6x _without_ changing legacyCloud (i.e. it
>>> defaults to "false").
>>> Try opening it with 7x
>>> BOOOM
>>>
>>> I'm seeing:
>>> Caused by: org.apache.solr.common.SolrException: No coreNodeName for
>>> CoreDescriptor[name=eoe_shard1_replica1;instanceDir=/Users/Erick/apache/solrJiras/branch_7x/solr/example/cloud/node1/solr/eoe_shard1_replica1]
>>>
>>> The problem is that legacyCloud now defaults to true, and 6x does
>>> _not_ save coreNodeName to core.properties files. However,
>>> ZkController.checkStateInZk requires that coreNodeName be non-null and
>>> it's read from core.properties.
>>>
>>> I get the exact same behavior when I create a collection in 6x then
>>> change legacyCloud to false and restart 6x Solr.
>>>
>>> I don't think this should hold up 7.1 because of the issue from last
>>> week, people affected by this can set legacyCloud=true to get by.
>>>
>>> Or I need to see an eye doctor.
>>>
>>> Raise a JIRA?
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>


[jira] [Comment Edited] (SOLR-11478) Solr should remove its entry from live_nodes in zk immediately on shutdown and add after solr has loaded its cores and is ready to serve requests.

2017-10-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206914#comment-16206914
 ] 

Cao Manh Dat edited comment on SOLR-11478 at 10/17/17 2:28 AM:
---

bq. AFAIK Solr node that doesn't have any replicas for a collection - forwards 
all requests for this collection to other nodes. Shouldn't Solr node with 
replicas in DOWN state do the same? Thus /live_nodes entry can be present and 
all its replicas be DOWN - but node still be operational. WDYT?

Yeah, that sounds reasonable, but should we handle this in another ticket? 
Because it is a different solution. 


was (Author: caomanhdat):
bq. AFAIK Solr node that doesn't have any replicas for a collection - forwards 
all requests for this collection to other nodes. Shouldn't Solr node with 
replicas in DOWN state do the same? Thus /live_nodes entry can be present and 
all its replicas be DOWN - but node still be operational. WDYT?

Yeah, that sounds reasonable, but should we handle this in another ticket?

> Solr should remove its entry from live_nodes in zk immediately on shutdown 
> and add after solr has loaded its cores and is ready to serve requests.
> 
>
> Key: SOLR-11478
> URL: https://issues.apache.org/jira/browse/SOLR-11478
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Binoy Dalal
>Priority: Minor
>






[jira] [Commented] (SOLR-11478) Solr should remove its entry from live_nodes in zk immediately on shutdown and add after solr has loaded its cores and is ready to serve requests.

2017-10-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206914#comment-16206914
 ] 

Cao Manh Dat commented on SOLR-11478:
-

bq. AFAIK Solr node that doesn't have any replicas for a collection - forwards 
all requests for this collection to other nodes. Shouldn't Solr node with 
replicas in DOWN state do the same? Thus /live_nodes entry can be present and 
all its replicas be DOWN - but node still be operational. WDYT?

Yeah, that sounds reasonable, but should we handle this in another ticket?

> Solr should remove its entry from live_nodes in zk immediately on shutdown 
> and add after solr has loaded its cores and is ready to serve requests.
> 
>
> Key: SOLR-11478
> URL: https://issues.apache.org/jira/browse/SOLR-11478
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Binoy Dalal
>Priority: Minor
>






Re: 7.x has a major issue when upgrading from 6x I think

2017-10-16 Thread Erick Erickson
That's true. And that's my point. A user has defaults set in both and can't
use Solr with 7x. Are you saying that we've never supported switching an
existing collection created with legacyCloud set to true/default to false?
As in fail to load? Nowhere in the upgrade notes, for instance, is any
notice like "If upgrading existing 6x installations with legacyCloud set to
true/default you must set it to true to use Solr."

Effectively that means they can never get to legacyCloud=false without
hand-editing each and every core.properties file or starting over.
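For anyone stuck in that state, the hand-editing could in principle be scripted. A hedged sketch follows; this is NOT a supported Solr tool, and the coreNodeName for each core must really come from the collection's state.json in ZooKeeper, so the mapping argument here is a placeholder for that lookup:

```python
# Sketch of automating the "hand-edit every core.properties" workaround.
# Assumption: core_to_node_name maps each core's name to the coreNodeName
# recorded in state.json (this script does not talk to ZooKeeper).
import os
import tempfile

def add_core_node_names(solr_home, core_to_node_name):
    """Append a coreNodeName line to each core.properties that lacks one.
    Returns the list of files patched."""
    patched = []
    for root, _dirs, files in os.walk(solr_home):
        if "core.properties" not in files:
            continue
        path = os.path.join(root, "core.properties")
        with open(path) as f:
            props = dict(line.split("=", 1) for line in f.read().splitlines() if "=" in line)
        name = props.get("name", os.path.basename(root))
        if "coreNodeName" in props or name not in core_to_node_name:
            continue  # already patched, or no mapping known for this core
        with open(path, "a") as f:
            f.write("coreNodeName=%s\n" % core_to_node_name[name])
        patched.append(path)
    return patched

# Demo on a throwaway directory mimicking a 6.x core (paths are made up):
demo_home = tempfile.mkdtemp()
core_dir = os.path.join(demo_home, "eoe_shard1_replica1")
os.makedirs(core_dir)
with open(os.path.join(core_dir, "core.properties"), "w") as f:
    f.write("name=eoe_shard1_replica1\n")

patched = add_core_node_names(demo_home, {"eoe_shard1_replica1": "core_node1"})
```

The function is idempotent, so rerunning it after a partial pass is safe.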

On Oct 16, 2017 5:48 PM, "Varun Thacker"  wrote:

> Hi Erick,
>
> In Solr 6.x legacyCloud defaults to true
> In Solr 7.x legacyCloud defaults to false ( ZK as truth as per
> https://issues.apache.org/jira/browse/SOLR-8256 )
>
> On Mon, Oct 16, 2017 at 5:29 PM, Erick Erickson 
> wrote:
>
>> Check me out to see if I'm hallucinating
>>
>> Create a collection with 6x _without_ changing legacyCloud (i.e. it
>> defaults to "false").
>> Try opening it with 7x
>> BOOOM
>>
>> I'm seeing:
>> Caused by: org.apache.solr.common.SolrException: No coreNodeName for
>> CoreDescriptor[name=eoe_shard1_replica1;instanceDir=/Users/Erick/apache/solrJiras/branch_7x/solr/example/cloud/node1/solr/eoe_shard1_replica1]
>>
>> The problem is that legacyCloud now defaults to true, and 6x does
>> _not_ save coreNodeName to core.properties files. However,
>> ZkController.checkStateInZk requires that coreNodeName be non-null and
>> it's read from core.properties.
>>
>> I get the exact same behavior when I create a collection in 6x then
>> change legacyCloud to false and restart 6x Solr.
>>
>> I don't think this should hold up 7.1 because of the issue from last
>> week, people affected by this can set legacyCloud=true to get by.
>>
>> Or I need to see an eye doctor.
>>
>> Raise a JIRA?
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Commented] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-10-16 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206903#comment-16206903
 ] 

Robert Muir commented on LUCENE-7994:
-

I am confused about the heuristic, can you explain it?

{code}
return taxoReaderSize < 1024 || sumTotalHits < taxoReaderSize/10;
{code}

For the first condition, isn't taxoReaderSize essentially the cardinality? Why 
would we want a sparse hashtable in this low-cardinality case? I would think 
the opposite (a simple array should be best; it will be small).

And the second condition confuses me too, because we seem to be comparing 
apples and oranges. Wouldn't we instead only look at sumTotalHits/maxDoc (what 
% of the docs the query matches) when taxoReaderSize > 1024? If it's only 10% 
of the docs in the collection, we infer that an array could be very wasteful... 
of course we don't know the distribution, but it's just a heuristic.
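The suggested alternative could be restated as follows. This is a hypothetical sketch of the heuristic being described, not code from the patch, and the 1024 and 10% thresholds are illustrative:

```python
# Idea: a dense int[] of size taxoReaderSize is cheap when the ordinal space
# is small, and well utilized when the query matches many docs; only a large
# ordinal space combined with a selective query favors an int/int hash map.

def use_hash_map(taxo_reader_size, sum_total_hits, max_doc):
    if taxo_reader_size < 1024:
        return False  # dense array is tiny regardless of the result set
    return sum_total_hits < max_doc / 10  # sparse only for selective queries
```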



> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored the native hash map from LUCENE-7927 and use a simple heuristic 
> (customizable by the user by subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} lazy, so that they are only instantiated if you really need them.






[jira] [Updated] (SOLR-10651) Streaming Expressions statistical functions library

2017-10-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10651:
--
Attachment: SOLR_7_1_DOCS.patch

> Streaming Expressions statistical functions library
> ---
>
> Key: SOLR-10651
> URL: https://issues.apache.org/jira/browse/SOLR-10651
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
> Attachments: SOLR_7_1_DOCS.patch
>
>
> This is a ticket for organizing the new statistical programming features of 
> Streaming Expressions. It's also a place for the community to discuss what 
> functions are needed to support statistical programming. 
> Basic Syntax:
> {code}
> let(a = timeseries(...),
> b = timeseries(...),
> c = col(a, count(*)),
> d = col(b, count(*)),
> r = regress(c, d),
> tuple(p = predict(r, 50)))
> {code}
> The expression above is doing the following:
> 1) The let expression is setting variables (a, b, c, d, r).
> 2) Variables *a* and *b* are the output of timeseries() Streaming 
> Expressions. These will be stored in memory as lists of Tuples containing the 
> time series results.
> 3) Variables *c* and *d* are set using the *col* evaluator. The col evaluator 
> extracts a column of numbers from a list of tuples. In the example *col* is 
> extracting the count\(*\) field from the two time series result sets.
> 4) Variable *r* is the output from the *regress* evaluator. The regress 
> evaluator performs a simple regression analysis on two columns of numbers.
> 5) Once the variables are set, a single Streaming Expression is run by the 
> *let* expression. In the example the *tuple* expression is run. The tuple 
> expression outputs a single Tuple with name/value pairs. Any Streaming 
> Expression can be run by the *let* expression so this can be a complex 
> program. The streaming expression run by *let* has access to all the 
> variables defined earlier.
> 6) The tuple expression in the example has one name / value pair. The name 
> *p* is set to the output of the *predict* evaluator. The predict evaluator is 
> predicting the value of a dependent variable based on the independent 
> variable 50. The regression result stored in variable *r* is used to make the 
> prediction.
> 7) The output of this expression will be a single tuple with the value of the 
> predict function in the *p* field.
> The growing list of issues linked to this ticket are the array manipulation 
> and statistical functions that will form the basis of the stats library. The 
> vast majority of these functions are backed by algorithms in Apache Commons 
> Math. Other machine learning and math libraries will follow.
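The regress/predict steps above boil down to an ordinary least-squares fit on two columns followed by evaluating the fitted line. A plain-Python sketch of that pipeline (the real evaluators are backed by Apache Commons Math; the sample columns here are made up):

```python
def regress(xs, ys):
    """Simple linear regression y = slope * x + intercept, as the
    regress evaluator computes for two columns of numbers."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return {"slope": slope, "intercept": mean_y - slope * mean_x}

def predict(model, x):
    """Predict the dependent variable for an independent value x."""
    return model["slope"] * x + model["intercept"]

# Two count(*) columns as col() might extract from time series results:
c = [10, 20, 30, 40]
d = [12, 22, 32, 42]
r = regress(c, d)      # analogous to r = regress(c, d) in the expression
p = predict(r, 50)     # analogous to tuple(p = predict(r, 50))
```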






[jira] [Updated] (SOLR-10784) Streaming Expressions machine learning functions library

2017-10-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10784:
--
Attachment: (was: SOLR_7_1_DOCS.patch)

> Streaming Expressions machine learning functions library
> 
>
> Key: SOLR-10784
> URL: https://issues.apache.org/jira/browse/SOLR-10784
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is an umbrella ticket for Streaming Expression's machine learning 
> function library. It will be used in much the same way that SOLR-10651 is 
> being used for the statistical function library.
> In the beginning many of the tickets will be based on machine learning 
> functions in *Apache Commons Math*, but other ML and matrix math libraries 
> will also be used.






[jira] [Updated] (SOLR-10784) Streaming Expressions machine learning functions library

2017-10-16 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10784:
--
Attachment: SOLR_7_1_DOCS.patch

> Streaming Expressions machine learning functions library
> 
>
> Key: SOLR-10784
> URL: https://issues.apache.org/jira/browse/SOLR-10784
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR_7_1_DOCS.patch
>
>
> This is an umbrella ticket for Streaming Expression's machine learning 
> function library. It will be used in much the same way that SOLR-10651 is 
> being used for the statistical function library.
> In the beginning many of the tickets will be based on machine learning 
> functions in *Apache Commons Math*, but other ML and matrix math libraries 
> will also be used.






[jira] [Updated] (SOLR-11411) Re-order the Getting Started And Managing Solr sections

2017-10-16 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11411:
-
Attachment: SOLR-11411.patch

Here's another patch with the changes to the top-level nav.

> Re-order the Getting Started And Managing Solr sections
> ---
>
> Key: SOLR-11411
> URL: https://issues.apache.org/jira/browse/SOLR-11411
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: RefGuideTopLevel.png, SOLR-11411.patch, 
> SOLR-11411.patch, SOLR-11411.patch
>
>
> Today under "Getting Started" we have a few pages that could belong to a 
> "DevOps" section
> - Solr Configuration Files
> - Solr Upgrade Notes
> - Taking Solr to Production
> - Upgrading a Solr Cluster
> Some pages from "Managing Solr" section would also fit into this
> Lastly the "Solr Control Script Reference" page could go under that as well






Re: Release notes for Lucene/Solr 7.1

2017-10-16 Thread Alexandre Rafalovitch
I would improve it if I understood the Jira. After rereading it several
times, it seems to be something along the lines of
"extended the JSON Request API to also support all query parsers and their
nested parameters"? Except there is also something about an alternative
representation of the must/should/not structures.

I also could not find anything in the Reference Guide for the Request API
itself. The Google link 404s and I can't seem to find the right page -
if it exists - by browsing or searching.

Regards,
   Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 16 October 2017 at 17:01, Shalin Shekhar Mangar
 wrote:
> Thanks Alexandre. How about "New JSON based Query DSL for Solr that
> builds on top of the existing JSON Request API and allows queries and
> filters to be specified in a nested JSON structure"? Please feel free
> to improve.
>
> On Tue, Oct 10, 2017 at 9:10 PM, Alexandre Rafalovitch
>  wrote:
>> I was quite surprised to see "New JSON based Query DSL for Solr". I
>> thought we already had one (unfinished?).
>>
>> This seems to be the reference to SOLR-11244 which does say it is an
>> extension of what we have. But also, in its turn, unfinished?
>>
>> It would be nice for the release notes to clarify that it is not
>> something completely new, but is an extension and how far it goes this
>> time. Unless that's in the Ref docs already and then we could mention
>> it.
>>
>> Regards,
>>Alex.
>> 
>> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>>
>>
>> On 10 October 2017 at 11:21, Shalin Shekhar Mangar  wrote:
>>> Hello,
>>>
>>> I've created drafts for the 7.1 release notes:
>>>
>>> Lucene: https://wiki.apache.org/lucene-java/ReleaseNote71
>>> Solr: https://wiki.apache.org/solr/ReleaseNote71
>>>
>>> Please review and edit as you see fit.
>>>
>>> --
>>> Regards,
>>> Shalin Shekhar Mangar.
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Updated] (SOLR-11411) Re-order the Getting Started And Managing Solr sections

2017-10-16 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11411:
-
Attachment: RefGuideTopLevel.png

I applied the patch and the changes mostly look good, but I made a couple 
additional changes to put the top level in a better order. I've attached a 
screenshot of those changes.

> Re-order the Getting Started And Managing Solr sections
> ---
>
> Key: SOLR-11411
> URL: https://issues.apache.org/jira/browse/SOLR-11411
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: RefGuideTopLevel.png, SOLR-11411.patch, SOLR-11411.patch
>
>
> Today under "Getting Started" we have a few pages that could belong to a 
> "DevOps" section
> - Solr Configuration Files
> - Solr Upgrade Notes
> - Taking Solr to Production
> - Upgrading a Solr Cluster
> Some pages from "Managing Solr" section would also fit into this
> Lastly the "Solr Control Script Reference" page could go under that as well






[jira] [Commented] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206889#comment-16206889
 ] 

Alexandre Rafalovitch commented on SOLR-11495:
--

What would enabling a disabled parser look like? Would that mean a flag passed 
in at startup?

P.s. Is there a reason the case description is instead in the "Docs Text" 
field? That feels new, if not strange.

> Reduce the list of which query parsers are loaded by default
> 
>
> Key: SOLR-11495
> URL: https://issues.apache.org/jira/browse/SOLR-11495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.0
>Reporter: Shawn Heisey
>







[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9) - Build # 251 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/251/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

3 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:51123/solr/awhollynewcollection_0_shard2_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:51123/solr/awhollynewcollection_0_shard2_replica_n2), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:51123/solr/awhollynewcollection_0_shard2_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:51123/solr/awhollynewcollection_0_shard2_replica_n2), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([868634CD85ECFD27:CEF3407983DFD2B2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.ja

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 621 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/621/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testHandlingOfStaleAlias

Error Message:
java.util.LinkedHashMap cannot be cast to org.apache.solr.common.util.NamedList

Stack Trace:
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
at 
__randomizedtesting.SeedInfo.seed([730AA583F48244C7:6377E24A01614450]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.fetchClusterState(HttpClusterStateProvider.java:142)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getClusterProperties(HttpClusterStateProvider.java:276)
at 
org.apache.solr.client.solrj.impl.ClusterStateProvider.getClusterProperty(ClusterStateProvider.java:65)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testHandlingOfStaleAlias(CloudSolrClientTest.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
a

Re: 7.x has a major issue when upgrading from 6x I think

2017-10-16 Thread Varun Thacker
Hi Erick,

In Solr 6.x legacyCloud defaults to true
In Solr 7.x legacyCloud defaults to false ( ZK as truth as per
https://issues.apache.org/jira/browse/SOLR-8256 )

On Mon, Oct 16, 2017 at 5:29 PM, Erick Erickson 
wrote:

> Check me out to see if I'm hallucinating
>
> Create a collection with 6x _without_ changing legacyCloud (i.e. it
> defaults to "false").
> Try opening it with 7x
> BOOOM
>
> I'm seeing:
> Caused by: org.apache.solr.common.SolrException: No coreNodeName for
> CoreDescriptor[name=eoe_shard1_replica1;instanceDir=/
> Users/Erick/apache/solrJiras/branch_7x/solr/example/cloud/
> node1/solr/eoe_shard1_replica1]
>
> The problem is that legacyCloud now defaults to true, and 6x does
> _not_ save coreNodeName to core.properties files. However,
> ZkController.checkStateInZk requires that coreNodeName be non-null and
> it's read from core.properties.
>
> I get the exact same behavior when I create a collection in 6x then
> change legacyCloud to false and restart 6x Solr.
>
> I don't think this should hold up 7.1 because of the issue from last
> week, people affected by this can set legacyCloud=true to get by.
>
> Or I need to see an eye doctor.
>
> Raise a JIRA?
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
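
For clusters hit by this, the workaround Erick mentions (legacyCloud=true) is applied through the Collections API CLUSTERPROP action. A minimal sketch of building that request in Python; the host and port are hypothetical, point it at any node of the affected cluster:

```python
from urllib.parse import urlencode

# Hypothetical base URL for one node of the affected cluster.
SOLR = "http://localhost:8983/solr"

# CLUSTERPROP stores a cluster-wide property in ZooKeeper; setting
# legacyCloud=true is the workaround described above.
params = {"action": "CLUSTERPROP", "name": "legacyCloud", "val": "true"}
url = "%s/admin/collections?%s" % (SOLR, urlencode(params))
print(url)
# A real client would now issue the request, e.g. urllib.request.urlopen(url)
```

The property is stored cluster-wide in ZooKeeper, so it only needs to be set once per cluster, not per node.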


[jira] [Commented] (SOLR-11426) TestLazyCores fails too often

2017-10-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206832#comment-16206832
 ] 

Erick Erickson commented on SOLR-11426:
---

Since changing this there haven't been any failures on master, and only two on 
7x. My guess is that "something changed" in master to make this more frequent, 
but so far this seems to be on the right track.

This isn't the proper fix, but I want to let it bake a bit more to be certain 
we're not just getting lucky with not having failures on master.


> TestLazyCores fails too often
> -
>
> Key: SOLR-11426
> URL: https://issues.apache.org/jira/browse/SOLR-11426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have 
> to put some code up on Jenkins to test; last time I tried to get this to fail 
> locally I couldn't.






7.x has a major issue when upgrading from 6x I think

2017-10-16 Thread Erick Erickson
Check me out to see if I'm hallucinating

Create a collection with 6x _without_ changing legacyCloud (i.e. it
defaults to "false").
Try opening it with 7x
BOOOM

I'm seeing:
Caused by: org.apache.solr.common.SolrException: No coreNodeName for
CoreDescriptor[name=eoe_shard1_replica1;instanceDir=/Users/Erick/apache/solrJiras/branch_7x/solr/example/cloud/node1/solr/eoe_shard1_replica1]

The problem is that legacyCloud now defaults to true, and 6x does
_not_ save coreNodeName to core.properties files. However,
ZkController.checkStateInZk requires that coreNodeName be non-null and
it's read from core.properties.

I get the exact same behavior when I create a collection in 6x then
change legacyCloud to false and restart 6x Solr.

I don't think this should hold up 7.1 because of the issue from last
week, people affected by this can set legacyCloud=true to get by.

Or I need to see an eye doctor.

Raise a JIRA?




[JENKINS] Lucene-Solr-Tests-master - Build # 2122 - Still Unstable

2017-10-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2122/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
http://127.0.0.1:42933/solr/testcollection_shard1_replica_n3: Expected mime 
type application/octet-stream but got text/html. Error 404: HTTP ERROR: 404. 
Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason: 
Can not find: /solr/testcollection_shard1_replica_n3/update 
(Powered by Jetty:// 9.3.20.v20170531)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:42933/solr/testcollection_shard1_replica_n3: 
Expected mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason:
Can not find: /solr/testcollection_shard1_replica_n3/update
Powered by Jetty:// 9.3.20.v20170531



at 
__randomizedtesting.SeedInfo.seed([7BAA6BBC6B10BC3D:4672C59053FEE24D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:126)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRu

[jira] [Commented] (SOLR-11487) Collection Alias metadata for time partitioned collections

2017-10-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206772#comment-16206772
 ] 

Noble Paul commented on SOLR-11487:
---

I prefer the approach of adding a collection_metadata key to the current 
aliases.json. That means fewer ZooKeeper nodes; every extra node is extra 
bookkeeping.

> Collection Alias metadata for time partitioned collections
> --
>
> Key: SOLR-11487
> URL: https://issues.apache.org/jira/browse/SOLR-11487
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR_11487.patch
>
>
> SOLR-11299 outlines an approach to using a collection Alias to refer to a 
> series of collections of a time series. We'll need to store some metadata 
> about these time series collections, such as which field of the document 
> contains the timestamp to route on.
> The current {{/aliases.json}} is a Map with a key {{collection}} which is in 
> turn a Map of alias name strings to a comma delimited list of the collections.
> _If we change the comma delimited list to be another Map to hold the existing 
> list and more stuff, older CloudSolrClient (configured to talk to ZooKeeper) 
> will break_.  Although if it's configured with an HTTP Solr URL then it would 
> not break.  There's also some read/write hassle to worry about -- we may need 
> to continue to read an aliases.json in the older format.
> Alternatively, we could add a new map entry to aliases.json, say, 
> {{collection_metadata}} keyed by alias name?
> Perhaps another very different approach is to attach metadata to the 
> configset in use?






[jira] [Commented] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206765#comment-16206765
 ] 

Shawn Heisey commented on SOLR-11495:
-

If the outcome of this (after discussion and investigation) is just to remove 
the XML parser, I'm OK with that.

I do think it would be a good idea to take a close look at each parser enabled 
by default just to survey the functionality and make sure that nothing can get 
out.


> Reduce the list of which query parsers are loaded by default
> 
>
> Key: SOLR-11495
> URL: https://issues.apache.org/jira/browse/SOLR-11495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.0
>Reporter: Shawn Heisey
>







[jira] [Resolved] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non-

2017-10-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7995.

   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.6.3
   7.2
   7.0.2
   master (8.0)
   6.7
   7.1.1

> 'ant stage-maven-artifacts' should work from the top-level project directory, 
> and should provide a better error message when its 'maven.dist.dir' param 
> points to a non-existent directory
> --
>
> Key: LUCENE-7995
> URL: https://issues.apache.org/jira/browse/LUCENE-7995
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: 7.1.1, 6.7, master (8.0), 7.0.2, 7.2, 6.6.3
>
> Attachments: LUCENE-7995.patch
>
>







[jira] [Reopened] (SOLR-11491) HttpClusterStateProvider doesn't support retrieval of cluster properties

2017-10-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-11491:
-

[~ab]: this fix seems incomplete in some contexts...

From: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4231/
{noformat}
   [junit4]   2> 9456 INFO  
(TEST-CloudSolrClientTest.testHandlingOfStaleAlias-seed#[3B079852290A3E4A]) [   
 ] o.a.s.SolrTestCaseJ4 ###Ending testHandlingOfStaleAlias
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testHandlingOfStaleAlias -Dtests.seed=3B079852290A3E4A 
-Dtests.slow=true -Dtests.locale=es-VE -Dtests.timezone=America/Cancun 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.24s | CloudSolrClientTest.testHandlingOfStaleAlias <<<
   [junit4]> Throwable #1: java.lang.ClassCastException: 
java.util.LinkedHashMap cannot be cast to org.apache.solr.common.util.NamedList
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([3B079852290A3E4A:2B7ADF9BDCE93EDD]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.fetchClusterState(HttpClusterStateProvider.java:142)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getClusterProperties(HttpClusterStateProvider.java:276)
   [junit4]>at 
org.apache.solr.client.solrj.impl.ClusterStateProvider.getClusterProperty(ClusterStateProvider.java:65)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testHandlingOfStaleAlias(CloudSolrClientTest.java:226)

{noformat}

> HttpClusterStateProvider doesn't support retrieval of cluster properties
> 
>
> Key: SOLR-11491
> URL: https://issues.apache.org/jira/browse/SOLR-11491
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.2, master (8.0)
>
>
> SOLR-11285 refactoring caused the following bug to appear when 
> {{CloudSolrClient}} uses {{HttpClusterStateProvider}}:
> {code}
> java.lang.UnsupportedOperationException: Fetching cluster properties not 
> supported using the HttpClusterStateProvider. ZkClientClusterStateProvider 
> can be used for this.
>   at 
> __randomizedtesting.SeedInfo.seed([53591E2E965F9457:432459E763BC94C0]:0)
>   at 
> org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getClusterProperties(HttpClusterStateProvider.java:254)
>   at 
> org.apache.solr.client.solrj.impl.ClusterStateProvider.getClusterProperty(ClusterStateProvider.java:65)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClientTest.testHandlingOfStaleAlias(CloudSolrClientTest.java:226)
> {code}
> CLUSTERSTATUS response already contains cluster properties under "properties" 
> key, so this simply needs to be used in {{HttpClusterStateProvider}}.
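
Given that CLUSTERSTATUS already returns cluster properties under a "properties" key, the fix amounts to reading that key out of the parsed response. A minimal Python sketch; the response below is a trimmed, hypothetical shape, not a verbatim server payload:

```python
# Hypothetical, trimmed CLUSTERSTATUS response. Per the issue text, the
# real response carries cluster properties under cluster -> properties.
SAMPLE_RESPONSE = {
    "responseHeader": {"status": 0},
    "cluster": {
        "collections": {},
        "properties": {"legacyCloud": "false", "urlScheme": "http"},
    },
}

def cluster_properties(clusterstatus):
    """Pull cluster properties out of a parsed CLUSTERSTATUS response,
    returning an empty dict when the key is missing (e.g. older servers)."""
    return clusterstatus.get("cluster", {}).get("properties", {})

props = cluster_properties(SAMPLE_RESPONSE)
print(props.get("legacyCloud"))
```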






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1476 - Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1476/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
https://127.0.0.1:38601/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(https://127.0.0.1:38601/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at 
https://127.0.0.1:38601/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(https://127.0.0.1:38601/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([DC05AB074DF5FF1F:9470DFB34BC6D08A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.e

[JENKINS] Lucene-Solr-6.6-Linux (64bit/jdk1.8.0_144) - Build # 169 - Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/169/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Something is broken in the assert for no shards using the same indexDir - 
probably something was changed in the attributes published in the MBean of 
SolrCore : {}

Stack Trace:
java.lang.AssertionError: Something is broken in the assert for no shards using 
the same indexDir - probably something was changed in the attributes published 
in the MBean of SolrCore : {}
at 
__randomizedtesting.SeedInfo.seed([4366B07E1AF7A8B0:B13C4CA1CC48725]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.checkNoTwoShardsUseTheSameIndexDir(CollectionsAPIDistributedZkTest.java:646)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:524)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Comment Edited] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206223#comment-16206223
 ] 

Michael A. Alcorn edited comment on SOLR-11386 at 10/16/17 9:57 PM:


I should also clarify that I do not want a phrase query, i.e., the order of the 
tokens should not matter (I'm still learning the Solr jargon).


was (Author: malcorn_redhat):
I should also clarify that I do not want a phrase query, i.e., the order of the 
tokens should not matter (I'm still learning Solr jargon).

> Extracting learning to rank features fails when word ordering of EFI argument 
> changed.
> --
>
> Key: SOLR-11386
> URL: https://issues.apache.org/jira/browse/SOLR-11386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.5.1
>Reporter: Michael A. Alcorn
> Attachments: solr_efi_examples.zip
>
>
> I'm getting some extremely strange behavior when trying to extract features 
> for a learning to rank model. The following query incorrectly says all 
> features have zero values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=added 
> couple of fiber channel&rq={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=added couple of fiber channel 
> efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10
> {code}
> But this query, which simply moves the word "added" from the front of the 
> provided text to the back, properly fills in the feature values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=couple of 
> fiber channel added&rq={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=couple of fiber channel added 
> efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10
> {code}
> The explain output for the failing query can be found here:
> https://gist.github.com/manisnesan/18a8f1804f29b1b62ebfae1211f38cc4
> and the explain output for the properly functioning query can be found here:
> https://gist.github.com/manisnesan/47685a561605e2229434b38aed11cc65
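The symptom above is consistent with how unquoted multi-word values behave in Solr's local-params syntax: a value ends at the first whitespace, so only the leading token binds to the efi key and the remaining words are parsed as stray tokens. A minimal sketch of that splitting behavior (a deliberately simplified illustration, NOT Solr's actual local-params parser):

```python
def parse_local_params(body):
    """Simplified illustration (NOT Solr's real local-params parser):
    space-separated key=value pairs; an unquoted value ends at whitespace."""
    params = {}
    for token in body.split():
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value
        else:
            # Bare words left over from a truncated multi-word value.
            params.setdefault(token, None)
    return params

body = ("ltr model=redhat_efi_model reRankDocs=1 "
        "efi.case_summary=the efi.case_description=added couple of fiber channel "
        "efi.case_issue=the efi.case_environment=the")
params = parse_local_params(body)
print(params["efi.case_description"])  # only "added" survives
```

Under this reading, quoting the value (e.g. efi.case_description='added couple of fiber channel') would be the way to pass a multi-word efi argument intact; the exact quoting rules should be checked against the local-params and LTR sections of the Ref Guide.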



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 184 - Still Unstable

2017-10-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/184/

10 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([5952CD777F534844:F432797C626CE031]:0)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeAdded/127.0.0.1:40137_solr wasn't created

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeAdded/127.0.0.1:40137_solr 
wasn't created
at 
__randomizedtesting.SeedInfo.seed([5952CD777F534844:41E8457B716685AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org

Re: Release notes for Lucene/Solr 7.1

2017-10-16 Thread Shalin Shekhar Mangar
Thanks Alexandre. How about "New JSON based Query DSL for Solr that
builds on top of the existing JSON Request API and allows queries and
filters to be specified in a nested JSON structure"? Please feel free
to improve.
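For readers unfamiliar with the feature, a sketch of the kind of nested structure the description refers to (adapted from the SOLR-11244 discussion; the exact syntax and parser names should be checked against the Ref Guide):

```python
import json

# Sketch of a nested JSON DSL request body: a boolean query whose clauses
# are themselves named query parsers. Field names here follow the SOLR-11244
# discussion and may differ from the final documented syntax.
request_body = {
    "query": {
        "bool": {
            "must": [{"lucene": {"df": "name", "query": "solr lucene"}}],
            "must_not": [{"lucene": {"df": "state", "query": "deprecated"}}],
        }
    }
}
print(json.dumps(request_body, indent=2))
```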

On Tue, Oct 10, 2017 at 9:10 PM, Alexandre Rafalovitch
 wrote:
> I was quite surprised to see "New JSON based Query DSL for Solr". I
> thought we already had one (unfinished?).
>
> This seems to be the reference to SOLR-11244 which does say it is an
> extension of what we have. But also, in its turn, unfinished?
>
> It would be nice for the release notes to clarify that it is not
> something completely new, but is an extension and how far it goes this
> time. Unless that's in the Ref docs already and then we could mention
> it.
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 10 October 2017 at 11:21, Shalin Shekhar Mangar  wrote:
>> Hello,
>>
>> I've created drafts for the 7.1 release notes:
>>
>> Lucene: https://wiki.apache.org/lucene-java/ReleaseNote71
>> Solr: https://wiki.apache.org/solr/ReleaseNote71
>>
>> Please review and edit as you see fit.
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>>
>
>



-- 
Regards,
Shalin Shekhar Mangar.




[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20684 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20684/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:
Error from server at http://127.0.0.1:36621/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36621/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([2E61D6C44DEB47E2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete

Error Message:
Error from server at 
https://127.0.0.1:43467/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html. Error 404: HTTP ERROR: 404 
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: 
Can not find: /solr/testcollection_shard1_replica_n2/update 
(Powered by Jetty 9.3.20.v20170531)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:43467/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 
Error 404 
HTTP ERROR: 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not fi

[jira] [Commented] (SOLR-11487) Collection Alias metadata for time partitioned collections

2017-10-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206546#comment-16206546
 ] 

David Smiley commented on SOLR-11487:
-

Thanks for sharing your idea [~gus_heck].  This is an approach I didn't think 
of.  I'm concerned this is over-using ZooKeeper nodes for what could be a 
simple map instead.  It's not like the metadata on the collection (as 
associated with the alias) is going to change so often as to benefit from the 
ability to change some but not all of this metadata.  [~noble.paul] what do you 
think of this?

> Collection Alias metadata for time partitioned collections
> --
>
> Key: SOLR-11487
> URL: https://issues.apache.org/jira/browse/SOLR-11487
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR_11487.patch
>
>
> SOLR-11299 outlines an approach to using a collection Alias to refer to a 
> series of collections of a time series. We'll need to store some metadata 
> about these time series collections, such as which field of the document 
> contains the timestamp to route on.
> The current {{/aliases.json}} is a Map with a key {{collection}} which is in 
> turn a Map of alias name strings to a comma delimited list of the collections.
> _If we change the comma delimited list to be another Map to hold the existing 
> list and more stuff, older CloudSolrClient (configured to talk to ZooKeeper) 
> will break_.  Although if it's configured with an HTTP Solr URL then it would 
> not break.  There's also some read/write hassle to worry about -- we may need 
> to continue to read an aliases.json in the older format.
> Alternatively, we could add a new map entry to aliases.json, say, 
> {{collection_metadata}} keyed by alias name?
> Perhaps another very different approach is to attach metadata to the 
> configset in use?
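Reading the two layouts side by side may help. This is a hypothetical sketch of the current aliases.json shape and the proposed collection_metadata extension; the alias name and metadata property here are invented for illustration:

```python
import json

# Current aliases.json shape: alias name -> comma-delimited collection list.
current = {"collection": {"timeseries": "coll_2017_09,coll_2017_10"}}

# Hypothetical extension per the discussion above: metadata lives in a
# separate top-level map keyed by alias name, so older clients that only
# read "collection" keep working. "router.field" is an invented property.
proposed = {
    "collection": {"timeseries": "coll_2017_09,coll_2017_10"},
    "collection_metadata": {"timeseries": {"router.field": "timestamp_dt"}},
}

# An old client reads the same comma-delimited list from either layout.
old_client_view = json.loads(json.dumps(proposed))["collection"]["timeseries"]
print(old_client_view.split(","))  # ['coll_2017_09', 'coll_2017_10']
```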






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 249 - Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/249/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testCreateDelete 
{seed=[67E18A66747C46B7:7CF17B2A828CBE87]}

Error Message:
Could not find collection : pull_replica_test_create_delete

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
pull_replica_test_create_delete
at 
__randomizedtesting.SeedInfo.seed([67E18A66747C46B7:7CF17B2A828CBE87]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:111)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.TestPullReplica.testCreateDelete(TestPullReplica.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.TestPullReplica.testCreateDelete 
{seed=[67E18A66747C46B7:C8A7C7D6B64E75AB

[jira] [Commented] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206524#comment-16206524
 ] 

David Smiley commented on SOLR-11444:
-

Your change RE path & idx makes total sense Erick.

I'll improve this further as I indicated: support double-resolution of aliases 
(alias -> alias -> collection) and then test that it works.  Conditionally 
supporting, and eventually removing support for, double-resolution ought to be 
a separate issue.  This issue is mostly a refactoring with minor improvements 
related to consistency: consistent handling of comma-delimited 
collection/alias lists in the path, and ensuring CloudSolrClient can route an 
update to a multi-collection alias just as HttpSolrCall already can.

> Improve Aliases.java and comma delimited collection list handling
> -
>
> Key: SOLR-11444
> URL: https://issues.apache.org/jira/browse/SOLR-11444
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-11444.patch, SOLR_11444_Aliases.patch, 
> SOLR_11444_Aliases.patch
>
>
> While starting to look at SOLR-11299 I noticed some brittleness in 
> assumptions about Strings that refer to a collection.  Sometimes they are in 
> fact references to comma-separated lists, which it appears was added with the 
> introduction of collection aliases (an alias can refer to a comma-delimited 
> list).  So Java's type system kind of goes out the window when we do this.  
> In one case this leads to a bug -- CloudSolrClient will throw an NPE if you 
> try to write to such an alias.  Sending an update via HTTP will allow it and 
> send it to the first in the list.
> So this issue is about refactoring and some little improvements pertaining to 
> Aliases.java plus certain key spots that deal with collection references.  I 
> don't think I want to go as far as changing the public SolrJ API except to 
> adding documentation on what's possible.
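The string-vs-list ambiguity described above can be sketched in a few lines (a simplified illustration with invented names, not actual SolrJ code): a "collection" reference may really be an alias for a comma-delimited list, and callers that assume a single collection fail on it.

```python
def resolve_to_collections(aliases, ref):
    """Resolve a collection reference: it may be a plain collection name
    or an alias mapping to a comma-delimited list of collections."""
    return aliases.get(ref, ref).split(",")

aliases = {"logs": "logs_a,logs_b"}

# A caller that assumes exactly one collection breaks on a multi-collection
# alias -- the failure mode the issue describes for CloudSolrClient updates.
collections = resolve_to_collections(aliases, "logs")

# The HTTP path described above instead routes the update to the first
# collection in the list.
update_target = collections[0]
print(update_target)  # logs_a
```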






[jira] [Commented] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206526#comment-16206526
 ] 

Yonik Seeley commented on SOLR-11495:
-

XML is the special case here... it's introduced security exploit after security 
exploit because of its ability to make HTTP calls itself.
I think disabling other parsers is the wrong approach and will frustrate users 
while not really increasing security (they are not inherently less secure if 
you exclude XML).
In addition, the JSON query DSL depends on these qparsers (that's how its 
boolean query was implemented).
Many of these are "plugins" instead of "builtins" just as a matter of 
convenience, and I'd argue they are inherently an integral part of the query 
language.

> Reduce the list of which query parsers are loaded by default
> 
>
> Key: SOLR-11495
> URL: https://issues.apache.org/jira/browse/SOLR-11495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.0
>Reporter: Shawn Heisey
>







[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206515#comment-16206515
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit a2bef512bbfa55720b7440e51fa4afca73dd382e in lucene-solr's branch 
refs/heads/branch_7_1 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a2bef51 ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory


> 'ant stage-maven-artifacts' should work from the top-level project directory, 
> and should provide a better error message when its 'maven.dist.dir' param 
> points to a non-existent directory
> --
>
> Key: LUCENE-7995
> URL: https://issues.apache.org/jira/browse/LUCENE-7995
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7995.patch
>
>







[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206513#comment-16206513
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit 042bde3b5eab0606314db91899a8f7fb0cb93f7f in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=042bde3 ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory


> 'ant stage-maven-artifacts' should work from the top-level project directory, 
> and should provide a better error message when its 'maven.dist.dir' param 
> points to a non-existent directory
> --
>
> Key: LUCENE-7995
> URL: https://issues.apache.org/jira/browse/LUCENE-7995
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7995.patch
>
>







[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206514#comment-16206514
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit f5ec4c02e2dad6f8ca490670cb53e3bcf26e797e in lucene-solr's branch 
refs/heads/branch_7_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f5ec4c0 ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory


> 'ant stage-maven-artifacts' should work from the top-level project directory, 
> and should provide a better error message when its 'maven.dist.dir' param 
> points to a non-existent directory
> --
>
> Key: LUCENE-7995
> URL: https://issues.apache.org/jira/browse/LUCENE-7995
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7995.patch
>
>







[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206512#comment-16206512
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit 32288b0a0d6ed755774164379570171ffc63031b in lucene-solr's branch 
refs/heads/branch_6_6 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=32288b0 ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory









[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non-existent directory

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206516#comment-16206516
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit f20676b01b4c81669bf464ee76d172558f6ba16b in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f20676b ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory









[jira] [Commented] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non-existent directory

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206517#comment-16206517
 ] 

ASF subversion and git services commented on LUCENE-7995:
-

Commit dabb9ed3254a2f34595137f8087b6d64d5fcf7e0 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dabb9ed ]

LUCENE-7995: 'ant stage-maven-artifacts' should work from the top-level project 
directory, and should provide a better error message when its 'maven.dist.dir' 
param points to a non-existent directory









Re: [VOTE] Release Lucene/Solr 6.6.2 RC1

2017-10-16 Thread Anshum Gupta
Smoke tester is happy!

+1

SUCCESS! [0:45:12.883194]

-Anshum



> On Oct 15, 2017, at 12:01 PM, Ishan Chattopadhyaya 
>  wrote:
> 
> Please vote for release candidate 1 for Lucene/Solr 6.6.2
> 
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
>  
> 
> 
> You can run the smoke tester directly with this command:
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
>  
> 
> 
> Here's my +1
> SUCCESS! [0:29:21.090759]
> 



[jira] [Updated] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non-existent directory

2017-10-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7995:
---
Attachment: LUCENE-7995.patch

Patch, committing shortly.








[jira] [Created] (LUCENE-7995) 'ant stage-maven-artifacts' should work from the top-level project directory, and should provide a better error message when its 'maven.dist.dir' param points to a non-existent directory

2017-10-16 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7995:
--

 Summary: 'ant stage-maven-artifacts' should work from the 
top-level project directory, and should provide a better error message when its 
'maven.dist.dir' param points to a non-existent directory
 Key: LUCENE-7995
 URL: https://issues.apache.org/jira/browse/LUCENE-7995
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe









[jira] [Commented] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206486#comment-16206486
 ] 

Gus Heck commented on SOLR-11495:
-

It would be nice if this were paired with a convenient but reasonably secure 
way to enable anything no longer included by default. By convenient, I mean 
centralized... i.e. not editing a file on every deployed node.

> Reduce the list of which query parsers are loaded by default
> 
>
> Key: SOLR-11495
> URL: https://issues.apache.org/jira/browse/SOLR-11495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.0
>Reporter: Shawn Heisey
>







[jira] [Commented] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206467#comment-16206467
 ] 

Shawn Heisey commented on SOLR-11495:
-

This is how I think we should initially define the default list:

{code}
map.put(LuceneQParserPlugin.NAME, LuceneQParserPlugin.class);
map.put(FunctionQParserPlugin.NAME, FunctionQParserPlugin.class);
map.put(DisMaxQParserPlugin.NAME, DisMaxQParserPlugin.class);
map.put(ExtendedDismaxQParserPlugin.NAME, ExtendedDismaxQParserPlugin.class);
{code}

This list corresponds to these parser names: lucene, func, dismax, edismax.

I almost didn't include the function query parser in that list.  It is one of 
the more complex parsers we have, and might therefore be vulnerable to 
exploit ... but I think it's so commonly used that removing it would break a 
lot of installs.

For a lot of the remaining parsers, there are strong arguments for inclusion in 
the default list, but anytime a parser is considered for inclusion, we need to 
weigh how widely used that parser is against the possible risks of increasing 
the attack surface.  Is the terms query parser likely to be exploitable?  That 
would take a code review to determine.
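To make the shape of that proposal concrete, here is a small self-contained sketch of the reduced default registry. The parser names come from the comment above, but the {{DefaultQParsers}} class and the class-name strings standing in for Solr's real {{QParserPlugin}} classes are illustrative only, so the sketch runs without Solr on the classpath:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the reduced default QParser registry.
// Solr's actual map holds QParserPlugin Class objects; plain class-name
// strings are used here so this compiles with no Solr dependency.
public class DefaultQParsers {
    static final Map<String, String> DEFAULTS = new LinkedHashMap<>();
    static {
        DEFAULTS.put("lucene", "LuceneQParserPlugin");
        DEFAULTS.put("func", "FunctionQParserPlugin");
        DEFAULTS.put("dismax", "DisMaxQParserPlugin");
        DEFAULTS.put("edismax", "ExtendedDismaxQParserPlugin");
    }

    public static void main(String[] args) {
        // Prints the four default parser names in registration order.
        System.out.println(DEFAULTS.keySet());
    }
}
```

Any parser outside this map would then have to be registered explicitly (e.g. in solrconfig.xml) before it could be used.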









[jira] [Created] (SOLR-11495) Reduce the list of which query parsers are loaded by default

2017-10-16 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-11495:
---

 Summary: Reduce the list of which query parsers are loaded by 
default
 Key: SOLR-11495
 URL: https://issues.apache.org/jira/browse/SOLR-11495
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 7.0
Reporter: Shawn Heisey









Re: Making "routing" strategies for writing segments explicit ?

2017-10-16 Thread David Smiley
Hi Tommaso,

It's definitely something I've pondered on occasion but I'm left wondering
(a) is it worth it (experimentation will tell), and (b) perhaps Lucene
doesn't need anything new here: see MultiReader. Arguably this can be
handled at the search server layer by constructing multiple IndexWriters
and then a MultiReader over their collective indexes.  Perhaps a special
IndexSearcher QueryCache could be developed to partition itself on the
separate underlying readers.  Of course it would probably take a lot of
work to retrofit, say Solr, to do this but I'm dubious Lucene should be
saddled with unneeded complexity for this.
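A minimal sketch of the MultiReader idea described above, assuming Lucene 7.x (lucene-core and lucene-analyzers-common) on the classpath. The two hard-coded "partitions" stand in for whatever routing strategy picks a writer per document; that routing logic is hypothetical, not an existing Lucene API:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class PartitionedSearch {

    // Builds one index per "partition" and searches them as a single
    // logical index via MultiReader.
    static long searchAcrossPartitions() throws Exception {
        RAMDirectory dirA = new RAMDirectory();
        RAMDirectory dirB = new RAMDirectory();

        // One IndexWriter per partition; a real router would choose
        // the writer based on a document field (e.g. a cluster id).
        try (IndexWriter wA = new IndexWriter(dirA, new IndexWriterConfig(new StandardAnalyzer()));
             IndexWriter wB = new IndexWriter(dirB, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document d1 = new Document();
            d1.add(new TextField("body", "hello partition a", Field.Store.NO));
            wA.addDocument(d1);

            Document d2 = new Document();
            d2.add(new TextField("body", "hello partition b", Field.Store.NO));
            wB.addDocument(d2);
        }

        // MultiReader presents the per-partition indexes as one index,
        // so queries see documents from every partition.
        try (MultiReader multi = new MultiReader(DirectoryReader.open(dirA),
                                                 DirectoryReader.open(dirB))) {
            IndexSearcher searcher = new IndexSearcher(multi);
            return searcher.search(new TermQuery(new Term("body", "hello")), 10).totalHits;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(searchAcrossPartitions());
    }
}
```

Per-partition query caching, as speculated above, would be extra work on top of this; the sketch only shows that the read side composes cheaply.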

On Thu, Oct 12, 2017 at 9:55 AM Tommaso Teofili 
wrote:

> Hi all,
>
> having been involved in such kind of challenge and having seen a few more
> similar enquiries on the dev list, I was wondering if it may be time to
> think about making it possible to have an explicit (customizable and
> therefore pluggable) policy which allows people to chime into where
> documents and / or segments get written (on write or on merge).
> Recently there was someone asking about possibly having segments sorted by
> a field using SortingMergePolicy, but as Uwe noted it's currently an
> implementation detail. Personally I have tried (and failed, because it was
> too costly) to make sure that docs belonging to certain clusters (identified
> by a field) are written within the same segments (for data locality / memory
> footprint concerns when "loading" docs from a certain cluster).
>
> As of today that'd be *really* hard, but I just wanted to share my feeling
> that such topic might be something to keep an eye on.
>
> My 2 cents,
> Tommaso
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 620 - Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/620/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
22 threads leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler:
   1) Thread[id=835, name=qtp905361542-835, state=RUNNABLE, group=TGRP-TestSQLHandler]
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
        at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
        at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=858, name=searcherExecutor-329-thread-1, state=WAITING, group=TGRP-TestSQLHandler]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   3) Thread[id=868, name=zkCallback-148-thread-5, state=TIMED_WAITING, group=TGRP-TestSQLHandler]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=845, name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestSQLHandler]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:748)
   5) Thread[id=869, name=zkCallback-148-thread-6, state=TIMED_WAITING, group=TGRP-TestSQLHandler]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   6) Thread[id=836, name=qtp905361542-836-acceptor-0@6ceea1de-ServerConnector@47f87181{SSL,[ssl, http/1.1]}{127.0.0.1:40777}, state=RUNNABLE, group=TGRP-TestSQLHandler]
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
        at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
        at org.eclipse.jet

[JENKINS] Lucene-Solr-NightlyTests-6.6 - Build # 31 - Still Unstable

2017-10-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.6/31/

7 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([62216138BDCB1501]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.spatial3d.TestGeo3DPoint

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([62216138BDCB1501]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates

Error Message:
Timeout while trying to assert number of documents @ source_collection

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert number of documents @ 
source_collection
at 
__randomizedtesting.SeedInfo.seed([E1BDE91CA304AC18:32B4B902E697308F]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:271)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates(CdcrReplicationHandlerTest.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertio

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4231 - Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4231/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([28BF6771F12F0696:963401DE885508A3]:0)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.getNodeName(JettySolrRunner.java:345)
at 
org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration(ClusterStateUpdateTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ClusterStateUpdateTest

Error Message:
17 threads leaked from SUITE scope at org.apache.solr.cloud.ClusterStateUpdateTest:
   1) Thread[id=18186, name=Connection evictor, state=TIMED_WAITING, group=TGRP-ClusterStateUpdateTest]
        at java.lang.Thread.sleep(Native Metho

[jira] [Updated] (SOLR-11411) Re-order the Getting Started And Manging Solr sections

2017-10-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11411:
-
Attachment: SOLR-11411.patch

{{ant clean build-site}} is successful with this patch. 

> Re-order the Getting Started  And Manging Solr sections
> ---
>
> Key: SOLR-11411
> URL: https://issues.apache.org/jira/browse/SOLR-11411
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-11411.patch, SOLR-11411.patch
>
>
> Today under "Getting Started" we have a few pages that could belong to a 
> "DevOps" section
> - Solr Configuration Files
> - Solr Upgrade Notes
> - Taking Solr to Production
> - Upgrading a Solr Cluster
> Some pages from "Managing Solr" section would also fit into this
> Lastly the "Solr Control Script Reference" page could go under that as well






[jira] [Commented] (SOLR-11411) Re-order the Getting Started And Manging Solr sections

2017-10-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206348#comment-16206348
 ] 

Varun Thacker commented on SOLR-11411:
--

Fixing the validation errors and uploading new patch shortly







[jira] [Updated] (SOLR-11411) Re-order the Getting Started And Manging Solr sections

2017-10-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11411:
-
Attachment: SOLR-11411.patch

Re-ordered the pages as proposed. It turned out to be quite a simple task.

[~ctargett] can you please verify the changes and then I'll go commit it







Re: [VOTE] Release Lucene/Solr 7.1.0 RC2

2017-10-16 Thread Shalin Shekhar Mangar
Owing to the serious nature of the exploit being fixed with these
artifacts and that there are +1s from three PMC members and no -1s,
I'm going to close the voting now.

This vote has passed. Thanks to everyone who voted.

On Sat, Oct 14, 2017 at 12:55 AM, Shalin Shekhar Mangar
 wrote:
> Please vote for release candidate 2 for Lucene/Solr 7.1.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.1.0-RC2-rev84c90ad2c0218156c840e19a64d72b8a38550659
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.1.0-RC2-rev84c90ad2c0218156c840e19a64d72b8a38550659
>
> Smoke tester passed for me.
> SUCCESS! [0:40:53.908967]
>
> Here's my +1 to release.
>
> --
> Regards,
> Shalin Shekhar Mangar.



-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Resolved] (SOLR-11055) Add 'commitWithin' testing (of both soft/hard commits) to SoftAutoCommitTest

2017-10-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11055.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> Add 'commitWithin' testing (of both soft/hard commits) to SoftAutoCommitTest 
> -
>
> Key: SOLR-11055
> URL: https://issues.apache.org/jira/browse/SOLR-11055
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11055.patch, SOLR-11055.patch
>
>
> SoftAutoCommitTest should be enhanced with its monitor-based polling to also 
> check that commitWithin works just as well as autocommit maxTime for either 
> softCommit or hardCommit (we can't test both at the same time due to how 
> commitWithin is configured).
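The monitor-based polling the issue describes can be sketched as follows. This is an illustrative stand-in, not SolrJ's actual API: the `probe` callable (e.g. "does a query now return the new doc?") is injected so the sketch stays self-contained, and the slack value is an assumption.

```python
# Sketch of monitor-based polling: after sending an update with
# commitWithin=T, poll until the change becomes visible and report whether
# it appeared within T plus some slack. The probe callable is hypothetical,
# injected so the sketch stays self-contained; this is not SolrJ code.
import time

def wait_for_visibility(probe, commit_within_ms, slack_ms=500, step_ms=50):
    deadline = time.monotonic() + (commit_within_ms + slack_ms) / 1000.0
    while time.monotonic() < deadline:
        if probe():          # e.g. "does a query now return the new doc?"
            return True
        time.sleep(step_ms / 1000.0)
    return False
```

A test would assert that the document becomes visible within `commitWithin` plus the allowed slack, mirroring what the autocommit-maxTime checks already do.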






[jira] [Commented] (SOLR-11055) Add 'commitWithin' testing (of both soft/hard commits) to SoftAutoCommitTest

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206229#comment-16206229
 ] 

ASF subversion and git services commented on SOLR-11055:


Commit 54b63d17af4b39f85794678077019b4672a8f8d0 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=54b63d1 ]

SOLR-11055: Add 'commitWithin' testing (of both soft/hard commits) to 
SoftAutoCommitTest

(cherry picked from commit b21721f152b48317817bafc508066160864df4c3)








[jira] [Comment Edited] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203861#comment-16203861
 ] 

Michael A. Alcorn edited comment on SOLR-11386 at 10/16/17 5:15 PM:


I was incorrect; the issue persists in Solr 6.6.0. However, I believe I've 
discovered a workaround. If you use:

{code}
{
  "store": "redhat_efi_feature_store",
  "name": "case_description_issue_tfidf",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!dismax qf=text_tfidf}${text}"
  }
}
{code}

instead of:

{code}
{
  "store": "redhat_efi_feature_store",
  "name": "case_description_issue_tfidf",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!field f=issue_tfidf}${case_description}"
  }
}
{code}

you can then use single quotes to incorporate multi-term arguments as 
[~alessandro.benedetti] suggested.
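The quoting trick above can be sketched as request construction. The host, model name, and `efi.text` parameter below are hypothetical placeholders, not values from the original report; the point is that single quotes keep the multi-term EFI argument as one value inside the `{!ltr ...}` local-params block.

```python
# Sketch of the workaround: wrap the multi-term EFI value in single quotes
# so the whole phrase survives local-params parsing inside {!ltr ...}.
# Host, collection, model, and EFI names are hypothetical placeholders.
from urllib.parse import urlencode

def build_ltr_query(base_url, q, model, text):
    # Single quotes keep the multi-term argument as one EFI value.
    rq = "{{!ltr model={m} reRankDocs=1 efi.text='{t}'}}".format(m=model, t=text)
    params = {"q": q, "rq": rq, "fl": "id,score,[features]", "rows": 10}
    return base_url + "?" + urlencode(params)

url = build_ltr_query("http://localhost:8983/solr/access/query",
                      "couple of fiber channel",
                      "redhat_efi_model",
                      "added couple of fiber channel")
```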


was (Author: malcorn_redhat):
-I just set up a local install of Solr 6.6.0 with a toy data set and tested 
multi-term EFI arguments using single quotes and it worked as expected. The 
issue seems to be isolated to older Solr versions. We'll upgrade our 
development version and see if that fixes it.-

I was incorrect; the issue persists in Solr 6.6.0. However, I believe I've 
discovered a workaround. If you use:

{code}
{
  "store": "redhat_efi_feature_store",
  "name": "case_description_issue_tfidf",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!dismax qf=text_tfidf}${text}"
  }
}
{code}

instead of:

{code}
{
  "store": "redhat_efi_feature_store",
  "name": "case_description_issue_tfidf",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!field f=issue_tfidf}${case_description}"
  }
}
{code}

you can then use single quotes to incorporate multi-term arguments as 
[~alessandro.benedetti] suggested.

> Extracting learning to rank features fails when word ordering of EFI argument 
> changed.
> --
>
> Key: SOLR-11386
> URL: https://issues.apache.org/jira/browse/SOLR-11386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.5.1
>Reporter: Michael A. Alcorn
> Attachments: solr_efi_examples.zip
>
>
> I'm getting some extremely strange behavior when trying to extract features 
> for a learning to rank model. The following query incorrectly says all 
> features have zero values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=added 
> couple of fiber channel&rq={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=added couple of fiber channel 
> efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10
> {code}
> But this query, which simply moves the word "added" from the front of the 
> provided text to the back, properly fills in the feature values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=couple of 
> fiber channel added&rq={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=couple of fiber channel added 
> efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10
> {code}
> The explain output for the failing query can be found here:
> https://gist.github.com/manisnesan/18a8f1804f29b1b62ebfae1211f38cc4
> and the explain output for the properly functioning query can be found here:
> https://gist.github.com/manisnesan/47685a561605e2229434b38aed11cc65






[jira] [Comment Edited] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176795#comment-16176795
 ] 

Michael A. Alcorn edited comment on SOLR-11386 at 10/16/17 5:14 PM:


-I just set up a local install of Solr 6.6.0 with a toy data set and tested 
multi-term EFI arguments using single quotes and it worked as expected. The 
issue seems to be isolated to older Solr versions. We'll upgrade our 
development version and see if that fixes it.-


was (Author: malcorn_redhat):
I just set up a local install of Solr 6.6.0 with a toy data set and tested 
multi-term EFI arguments using single quotes and it worked as expected. The 
issue seems to be isolated to older Solr versions. We'll upgrade our 
development version and see if that fixes it.







[jira] [Comment Edited] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176708#comment-16176708
 ] 

Michael A. Alcorn edited comment on SOLR-11386 at 10/16/17 5:12 PM:


[~alessandro.benedetti] - -yes.- Actually, this is not what I want; I 
apologize. I want a multi-term query where the order of the tokens does not 
influence the score.


was (Author: malcorn_redhat):
[~alessandro.benedetti] - yes.







[jira] [Commented] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206223#comment-16206223
 ] 

Michael A. Alcorn commented on SOLR-11386:
--

I should also clarify that I do not want a phrase query, i.e., the order of the 
tokens should not matter (I'm still learning Solr jargon).







[JENKINS] Lucene-Solr-6.6-Windows (64bit/jdk1.8.0_144) - Build # 66 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/66/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([6471B7F9EE907D3A:BC3C9AAE194DD89A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evalu

[jira] [Updated] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael A. Alcorn updated SOLR-11386:
-
Attachment: solr_efi_examples.zip







[jira] [Updated] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael A. Alcorn updated SOLR-11386:
-
Attachment: (was: solr_efi_examples.zip)







[jira] [Comment Edited] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206197#comment-16206197
 ] 

Michael A. Alcorn edited comment on SOLR-11386 at 10/16/17 5:04 PM:


[~alessandro.benedetti] - [the attached files|^solr_efi_examples.zip] provide a 
minimum working example demonstrating the unexpected behavior. 


was (Author: malcorn_redhat):
[~alessandro.benedetti] - the attached files provide a minimum working example 
demonstrating the unexpected behavior. 







[jira] [Updated] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-10-16 Thread Michael A. Alcorn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael A. Alcorn updated SOLR-11386:
-
Attachment: solr_efi_examples.zip

[~alessandro.benedetti] - the attached files provide a minimum working example 
demonstrating the unexpected behavior. 







[jira] [Commented] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-10-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206176#comment-16206176
 ] 

Dawid Weiss commented on LUCENE-7994:
-

The key value rehash function is pretty simplistic in those implementations. 
I've had bad experiences with collisions on such trivial functions in real life 
(in HPPC); these can vary from slow-downs to actual practical deadlocks (not to 
mention intentional adversaries) [1].

The current implementation in HPPC uses a different key mixing strategy [2], 
combined with a unique per-instance seed to minimize the practical impact of 
such clashes. The performance cost is there, but it's not huge... something to 
consider?

[1] http://issues.carrot2.org/browse/HPPC-80 
http://issues.carrot2.org/browse/HPPC-103
[2] 
https://github.com/carrotsearch/hppc/blob/master/hppc/src/main/java/com/carrotsearch/hppc/BitMixer.java
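The key-mixing strategy described above can be illustrated roughly as follows. The constants are the widely known murmur3 finalizer constants; the per-instance seed is this sketch's stand-in for HPPC's approach, not HPPC's actual code.

```python
# Sketch of the BitMixer idea: instead of using the raw key as its own
# hash, run it through an avalanching mix function combined with a
# per-instance random seed, so clustered or adversarial keys do not all
# land in the same buckets. Constants are the murmur3 finalizer's; the
# seeding scheme is illustrative, not HPPC's actual implementation.
import random

class MixedIntHash:
    def __init__(self, seed=None):
        self.seed = seed if seed is not None else random.getrandbits(32)

    def mix(self, k):
        # XOR with the seed, then avalanche with shift/multiply rounds.
        k = (k ^ self.seed) & 0xFFFFFFFF
        k = (k ^ (k >> 16)) * 0x85EBCA6B & 0xFFFFFFFF
        k = (k ^ (k >> 13)) * 0xC2B2AE35 & 0xFFFFFFFF
        return (k ^ (k >> 16)) & 0xFFFFFFFF

    def bucket(self, k, table_size):
        return self.mix(k) % table_size
```

Because each map instance mixes with its own seed, a key set that degenerates for one instance is unlikely to degenerate for another, which is the practical mitigation Dawid describes.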


> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored out the native hash map from LUCENE-7927 and used a simple heuristic 
> (customizable by the user via subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} arrays lazy, so that they are only instantiated if you really need them.
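The up-front sparse-vs-dense decision can be sketched like this. The 10x threshold is an invented placeholder for illustration, not the heuristic in the actual patch.

```python
# Sketch of the dense-vs-sparse choice: count into a dense array when the
# result set is large relative to the number of unique labels, otherwise
# into an int->int map so only touched ordinals cost memory. The threshold
# is an assumption for illustration, not Lucene's actual heuristic.
def make_counter(num_labels, result_set_size, threshold=10):
    if result_set_size * threshold >= num_labels:
        return [0] * num_labels          # dense: one slot per label
    return {}                            # sparse: only touched labels

def count(counter, ordinal):
    if isinstance(counter, list):
        counter[ordinal] += 1
    else:
        counter[ordinal] = counter.get(ordinal, 0) + 1
```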






Re: [VOTE] Release Lucene/Solr 6.6.2 RC1

2017-10-16 Thread Tommaso Teofili
+1

SUCCESS! [3:44:31.153413]

On Mon, Oct 16, 2017 at 17:41 Steve Rowe  wrote:

> +1
>
> Docs, changes and javadocs look good.
>
> Smoke tester says: SUCCESS! [0:27:54.146682]
>
> --
> Steve
> www.lucidworks.com
>
> > On Oct 15, 2017, at 3:01 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
> >
> > Please vote for release candidate 1 for Lucene/Solr 6.6.2
> >
> >
> > The artifacts can be downloaded from:
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
> >
> > You can run the smoke tester directly with this command:
> >
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
> >
> > Here's my +1
> >
> > SUCCESS! [0:29:21.090759]
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-9562) Minimize queried collections for time series alias

2017-10-16 Thread Radu Gheorghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206114#comment-16206114
 ] 

Radu Gheorghe commented on SOLR-9562:
-

I didn't know about the transient cores, it sounds like a cool concept. I think 
it would be a great fit for the time-series use-case, because:
* normally you'd only write to the latest collection. Maybe we could even have 
a configurable limit on how far back we could backfill (which is reasonable for 
most use-cases)
* normally you wouldn't have many replicas anyway, or maybe we can configure 
how many replicas to load based on X metrics? This sounds like a case for 
AutoScaling again :)

> Minimize queried collections for time series alias
> --
>
> Key: SOLR-9562
> URL: https://issues.apache.org/jira/browse/SOLR-9562
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Eungsop Yoo
>Priority: Minor
> Attachments: SOLR-9562-v2.patch, SOLR-9562.patch
>
>
> For indexing time series data (such as large log data), we can create a new 
> collection regularly (hourly, daily, etc.) with a write alias, and create a 
> read alias covering all of those collections. But all of the collections 
> behind the read alias are queried even if we search over a very narrow time 
> window. In that case the matching docs may be stored in only a small portion 
> of the collections, so querying all of them is unnecessary.
> I suggest this patch to let a read alias minimize the collections it queries. 
> Three parameters are added to the CREATEALIAS action.
> || Key || Type || Required || Default || Description ||
> | timeField | string | No | | The time field name for the time series data. 
> It should be a date type. |
> | dateTimeFormat | string | No | | The timestamp format used for collection 
> creation. Every collection should have a suffix (starting with "_") in this 
> format. 
> Ex. dateTimeFormat: yyyyMMdd, collectionName: col_20160927
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> | timeZone | string | No | | The time zone for the dateTimeFormat parameter.
> Ex. GMT+9. 
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> Then, when we query with a filter query like "timeField:\[fromTime TO 
> toTime\]", only the collections that hold docs for the given time range will 
> be queried.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2017-10-16 Thread Radu Gheorghe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radu Gheorghe updated SOLR-11473:
-
Attachment: SOLR-11473.patch

Thanks, [~thetaphi], [~mdrob] and [~thelabdude] for your comments. I tried 
specifying HdfsUpdateLog for the transaction log; indeed, that helps with 
storing the transaction log in Alluxio, but not with this path issue.

I'm attaching a patch that would fix this, with a few comments:
* I've only really changed the isAbsolute() method. I looked around in 
HDFSDirectoryFactory and didn't find other places where the change would be 
useful. Maybe I'm wrong, or maybe this could be refactored to look nicer, but 
I thought starting with a minimal patch would be better :)
* While I tested this patch with Alluxio and it worked well, I didn't add any 
unit tests. I thought they would basically be testing the URI class, which 
seemed pointless; on the other hand, it would be nice to make sure we don't 
lose this functionality (i.e. non-hdfs:/ paths working) in the future. Let me 
know if tests are needed, or if you have any suggestions on what should be 
covered (I'm thinking of an hdfs:/ path, an alluxio:/ path and a relative 
path).
* This was all tested with Solr 6.6.1, and I've based my changes off the 6_6 
branch from GitHub. I didn't add anything to CHANGES.txt because it's unclear 
to me where this change would go, or whether a CHANGES.txt entry should be 
added at this stage, without knowing the Fix Version. Also, `git format-patch` 
misbehaved for me, so I've generated this through `git diff`. Is that OK?

Besides the last question, do you have any other thoughts or questions?

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Priority: Minor
> Attachments: SOLR-11473.patch
>
>
> Not sure if it's a bug or a missing feature :) I'm trying to make Solr work 
> on Alluxio, as described by [~thelabdude] in 
> https://www.slideshare.net/thelabdude/running-solr-in-the-cloud-at-memory-speed-with-alluxio/1
> The problem I'm facing here is with autoAddReplicas. If I have 
> replicationFactor=1 and the node with that replica dies, the node taking over 
> incorrectly assigns the data directory. For example:
> before
> {code}"dataDir":"alluxio://localhost:19998/solr/test/",{code}
> after
> {code}"dataDir":"alluxio://localhost:19998/solr/test/core_node1/alluxio://localhost:19998/solr/test/",{code}
> The same happens for ulogDir. Apparently, this has to do with this bit from 
> HDFSDirectoryFactory:
> {code}  public boolean isAbsolute(String path) {
> return path.startsWith("hdfs:/");
>   }{code}
> If I add "alluxio:/" in there, the paths are correct and the index is 
> recovered.
> I see a few options here:
> * add "alluxio:/" to the list there
> * add a regular expression along the lines of \[a-z]*:/ (I hope that's not 
> too expensive; I'm not sure how often this method is called)
> * don't do anything and expect Alluxio to work with an "hdfs:/" path? I 
> actually tried that and didn't manage to make it work
> * have a different DirectoryFactory or something else?
> What do you think?
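For reference, the regex option mentioned above could look like the following. This is a hypothetical sketch (class name invented here), not the attached patch: it accepts any RFC 3986-style scheme followed by ":/", and precompiles the pattern to keep the per-call cost low.

```java
import java.util.regex.Pattern;

public class SchemeAwareAbsolutePath {

    // Treat any "scheme:/..." path (hdfs, alluxio, ...) as absolute instead of
    // hard-coding "hdfs:/". Scheme grammar per RFC 3986:
    // ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
    private static final Pattern SCHEME_PREFIX =
            Pattern.compile("^[a-zA-Z][a-zA-Z0-9+.-]*:/.*");

    public static boolean isAbsolute(String path) {
        return SCHEME_PREFIX.matcher(path).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAbsolute("hdfs:/solr/index"));                    // true
        System.out.println(isAbsolute("alluxio://localhost:19998/solr/test")); // true
        System.out.println(isAbsolute("core_node1/data"));                     // false
    }
}
```

Because the pattern is compiled once into a static field, calling this frequently should cost little more than the original startsWith() check.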






[JENKINS] Lucene-Solr-Tests-6.6 - Build # 41 - Unstable

2017-10-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.6/41/

11 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1200>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1200>
at 
__randomizedtesting.SeedInfo.seed([275D4C2EB5E82ED0:F318077752BE9D2B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CloudExitableDirectoryReaderTest

Error Message

Re: [VOTE] Release Lucene/Solr 6.6.2 RC1

2017-10-16 Thread Steve Rowe
+1

Docs, changes and javadocs look good.

Smoke tester says: SUCCESS! [0:27:54.146682]

--
Steve
www.lucidworks.com

> On Oct 15, 2017, at 3:01 PM, Ishan Chattopadhyaya  
> wrote:
> 
> Please vote for release candidate 1 for Lucene/Solr 6.6.2
> 
> 
> The artifacts can be downloaded from:
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
> 
> You can run the smoke tester directly with this command:
> 
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.2-RC1-revdf4de29b55369876769bb741d687e47b67ff9613
> 
> Here's my +1
> 
> SUCCESS! [0:29:21.090759]
> 





[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9) - Build # 6962 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6962/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:58752/solr/awhollynewcollection_0: 
{"awhollynewcollection_0":7}

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58752/solr/awhollynewcollection_0: 
{"awhollynewcollection_0":7}
at 
__randomizedtesting.SeedInfo.seed([EF42671D161B05DB:A73713A910282A4E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtest

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20683 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20683/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) 
Thread[id=13834, name=zkCallback-2641-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=13810, name=zkCallback-2641-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=13872, name=zkCallback-2641-thread-5, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=13637, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=13809, name=zkCallback-2641-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)6) 
Thread[id=13639, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B2A3EBD7BB0367EB]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native M

[jira] [Commented] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-10-16 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206059#comment-16206059
 ] 

Michael Braun commented on LUCENE-7994:
---

[~mikemccand] Issues like [LUCENE-7525] would benefit from an int-int hash map; 
could it be added somewhere more common than the facets module?

> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored the native hash map from LUCENE-7927 and use a simple heuristic 
> (customizable by the user by subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} lazy, so that they are only instantiated if you really need them.
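The sparse-versus-dense decision described above could be sketched as follows. This is an illustrative stand-in, not the attached patch: the 10x threshold is a guess at the kind of heuristic meant, and a boxed HashMap stands in for the native int/int map from LUCENE-7927.

```java
import java.util.HashMap;
import java.util.Map;

public class SparseDenseCountsSketch {

    // Up-front heuristic: count sparsely when the result set is small
    // relative to the number of unique facet labels. The 10x factor is
    // an illustrative guess, not Lucene's actual threshold.
    static boolean useSparse(int numUniqueLabels, int resultSetSize) {
        return (long) resultSetSize * 10 < numUniqueLabels;
    }

    // Sparse: only ordinals that actually occur consume memory.
    static Map<Integer, Integer> countSparse(int[] ordinals) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int ord : ordinals) {
            counts.merge(ord, 1, Integer::sum);
        }
        return counts;
    }

    // Dense: one slot per unique label, even for labels never seen,
    // which is wasteful when the result set is small.
    static int[] countDense(int[] ordinals, int numUniqueLabels) {
        int[] counts = new int[numUniqueLabels];
        for (int ord : ordinals) {
            counts[ord]++;
        }
        return counts;
    }
}
```

A subclass hook (as the issue describes) would let users replace useSparse() with their own trade-off.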






Re: pom.xml.template schemaLocation

2017-10-16 Thread Steve Rowe
I guess canonical is overstating it - here’s the one from both the POM 
reference and the 3.5.0 Maven Model:

> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> http://maven.apache.org/xsd/maven-4.0.0.xsd"

+1 from me to change our pom.xml.template-s. (I don’t know whether “http://” 
or “https://” is better.)
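For concreteness, the updated header in the pom.xml.template files would look something like this (using the http:// variant; the xmlns attributes are the standard POM boilerplate, shown only for context):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <!-- ... -->
</project>
```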

--
Steve
www.lucidworks.com

> On Oct 16, 2017, at 10:21 AM, Steve Rowe  wrote:
> 
> Hi Christine,
> 
>> On Oct 16, 2017, at 9:00 AM, Christine Poerschke (BLOOMBERG/ LONDON) 
>>  wrote:
>> 
>> I noticed that we have
>> 
>> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
>> http://maven.apache.org/maven-v4_0_0.xsd";
>> 
>> currently and that doesn't seem to exist at present.
> 
> wget against the .xsd link says:
> 
>> HTTP request sent, awaiting response... 301 Moved Permanently
>> Location: http://maven.apache.org/xsd/maven-4.0.0.xsd [following]
> 
> But: I ran ‘mvn archetype:generate’ using Maven 3.5.0 and got the exact same 
> xsi:schemaLocation as we have in our pom.xml.template files:
> 
>> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
>> http://maven.apache.org/maven-v4_0_0.xsd
> 
> I guess one could say this is canonical?
> 
> --
> Steve
> www.lucidworks.com
> 
> 
>> There's a https://maven.apache.org/xsd/maven-v3_0_0.xsd and
>>   a https://maven.apache.org/xsd/maven-4.0.0.xsd
>> linked on http://maven.apache.org/ref/3.5.0/maven-model/maven.html however.
>> 
>> Can I change our schemaLocation in the pom.xml.template files to 
>> https://maven.apache.org/xsd/maven-4.0.0.xsd or is some other change needed?
>> 
>> Thanks,
>> Christine
> 





Re: pom.xml.template schemaLocation

2017-10-16 Thread Steve Rowe
Hi Christine,

> On Oct 16, 2017, at 9:00 AM, Christine Poerschke (BLOOMBERG/ LONDON) 
>  wrote:
> 
> I noticed that we have
> 
> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
> http://maven.apache.org/maven-v4_0_0.xsd";
> 
> currently and that doesn't seem to exist at present.

wget against the .xsd link says:

> HTTP request sent, awaiting response... 301 Moved Permanently
> Location: http://maven.apache.org/xsd/maven-4.0.0.xsd [following]

But: I ran ‘mvn archetype:generate’ using Maven 3.5.0 and got the exact same 
xsi:schemaLocation as we have in our pom.xml.template files:

> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
> http://maven.apache.org/maven-v4_0_0.xsd

I guess one could say this is canonical?

--
Steve
www.lucidworks.com


> There's a https://maven.apache.org/xsd/maven-v3_0_0.xsd and
>a https://maven.apache.org/xsd/maven-4.0.0.xsd
> linked on http://maven.apache.org/ref/3.5.0/maven-model/maven.html however.
> 
> Can I change our schemaLocation in the pom.xml.template files to 
> https://maven.apache.org/xsd/maven-4.0.0.xsd or is some other change needed?
> 
> Thanks,
> Christine





[jira] [Commented] (SOLR-11494) Expected mime type application/octet-stream but got text/html

2017-10-16 Thread khawaja MUHAMMAD Shoaib (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205963#comment-16205963
 ] 

khawaja MUHAMMAD Shoaib commented on SOLR-11494:


Actually, I need to find out why Solr is throwing this exception, because as 
you can see the exception starts with:
Error from server at http://127.0.0.1:8983/solr: Expected mime type 
application/octet-stream but got text/html.

Someone on Stack Overflow suggested enabling a setting in Solr:

https://stackoverflow.com/questions/24089769/solr-realtime-get-remotesolrexception-expected-mime-type-application-xml-but-go

I have already changed that Solr setting, but nothing changed.


> Expected mime type application/octet-stream but got text/html
> -
>
> Key: SOLR-11494
> URL: https://issues.apache.org/jira/browse/SOLR-11494
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI, SolrJ
>Affects Versions: 6.5
> Environment: Windows 10
> Java jdk1.8.0_144
> Solr 6.5.0
> Spring Data Solr 2.0.5.RELEASE
> Spring Version 4.3.12.RELEASE
>Reporter: khawaja MUHAMMAD Shoaib
> Attachments: MerchantModel.java, MerchantRepository.java, 
> SolrConfig.java
>
>
> I have been following the tutorial at the link below to implement Spring Data 
> Solr: http://www.baeldung.com/spring-data-solr
> Attached are my config file, model and repository for Spring Data Solr.
> When I make any query or save my model, I receive the exception below.
> My Solr works fine when I ping it from the browser at 
> http://127.0.0.1:8983/solr/
> {code:java}
>  MerchantModel model = new MerchantModel();
> model.setId("2");
> model.setLocation("31.5287,74.4121");
> model.setTitle("khawaja");
> merchantRepository.save(model);
> {code}
>  
> upon save i am getting the below exception 
> ###
> org.springframework.data.solr.UncategorizedSolrException: Error from server 
> at http://127.0.0.1:8983/solr: Expected mime type application/octet-stream 
> but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ; nested exception is 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ###






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 245 - Still Unstable!

2017-10-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/245/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=33608, name=jetty-launcher-5069-thread-1-SendThread(127.0.0.1:35405), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
2) Thread[id=33609, name=jetty-launcher-5069-thread-1-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
3) Thread[id=33548, name=jetty-launcher-5069-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=33608, name=jetty-launcher-5069-thread-1-SendThread(127.0.0.1:35405), state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
   2) Thread[id=33609, name=jetty-launcher-5069-thread-1-EventThread, state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
   3) Thread[id=33548, name=jetty-launcher-5069-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at

[jira] [Resolved] (SOLR-11494) Expected mime type application/octet-stream but got text/html

2017-10-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11494.
---
Resolution: Later

Please raise this question on the user's list at solr-u...@lucene.apache.org 
(see http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are 
a _lot_ more people watching that list who may be able to help.

If it's determined that this really is a code issue in Solr and not a 
configuration/usage problem, we can raise a new JIRA or reopen this one.

As this is a Spring-specific question, perhaps the Spring user's lists would be 
useful as well.

> Expected mime type application/octet-stream but got text/html
> -
>
> Key: SOLR-11494
> URL: https://issues.apache.org/jira/browse/SOLR-11494
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI, SolrJ
>Affects Versions: 6.5
> Environment: Windows 10
> Java jdk1.8.0_144
> Solr 6.5.0
> Spring Data Solr 2.0.5.RELEASE
> Spring Version 4.3.12.RELEASE
>Reporter: khawaja MUHAMMAD Shoaib
> Attachments: MerchantModel.java, MerchantRepository.java, 
> SolrConfig.java
>
>
> I have been following the tutorial at the link below to implement Spring Data Solr: 
> http://www.baeldung.com/spring-data-solr
> Attached are my config file, model, and repository for Spring Data Solr.
> When I make any query or save my model, I receive the exception below.
> My Solr works fine when I ping it from the browser at 
> http://127.0.0.1:8983/solr/
> {code:java}
> MerchantModel model = new MerchantModel();
> model.setId("2");
> model.setLocation("31.5287,74.4121");
> model.setTitle("khawaja");
> merchantRepository.save(model);
> {code}
>  
> Upon save I get the exception below:
> ###
> org.springframework.data.solr.UncategorizedSolrException: Error from server 
> at http://127.0.0.1:8983/solr: Expected mime type application/octet-stream 
> but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ; nested exception is 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ###



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9562) Minimize queried collections for time series alias

2017-10-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205928#comment-16205928
 ] 

Erick Erickson commented on SOLR-9562:
--

Radu:

bq: That said, loading/unloading shards might help reduce the overhead of many 
shards, assuming that old data is rarely touched

In stand-alone mode there's the whole "transient core" concept: essentially, 
Solr cores are cached in a size-limited cache. When an operation is performed 
on a core that's not already in the cache, it's loaded on the fly, and if 
loading it goes over the cache size, the least-recently-used core is unloaded. 
This is all totally automatic; the only thing the user has to configure is the 
size of the cache.

This has _not_ been worked through with SolrCloud; all the decisions are made 
locally. The problems I foresee in the general SolrCloud case mainly have to do 
with thrashing when, say, updates are distributed... all the replicas for all 
the shards receiving updates would have to be loaded. There'd need to be some 
kind of way to re-use replicas in a shard for queries until traffic exceeded 
some limit (why should Solr reload 10 replicas for a shard for 10 different 
queries if the QPS rate was 10/minute?). Perhaps some of the new metrics could 
be used for that case.

Anyway, the transient core stuff was never envisioned with SolrCloud in mind, 
but it might be useful in this case.
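As a rough sketch of the stand-alone setup described above (the cache size of 32 and the core name are arbitrary examples, not recommendations), the cache cap goes in solr.xml and each swappable core opts in via its core.properties:

```xml
<!-- solr.xml: cap how many transient cores stay loaded at once -->
<solr>
  <int name="transientCacheSize">32</int>
</solr>
```

```
# core.properties for each core that may be loaded/unloaded on demand
# (name=mycore is a placeholder)
name=mycore
transient=true
loadOnStartup=false
```

With this in place, cores beyond the cache size are evicted least-recently-used and reloaded transparently on the next request that touches them.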

> Minimize queried collections for time series alias
> --
>
> Key: SOLR-9562
> URL: https://issues.apache.org/jira/browse/SOLR-9562
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Eungsop Yoo
>Priority: Minor
> Attachments: SOLR-9562-v2.patch, SOLR-9562.patch
>
>
> For indexing time series data (such as large log data), we can create a new 
> collection regularly (hourly, daily, etc.) with a write alias and create a 
> read alias for all of those collections. But all of the collections of the 
> read alias are queried even if we search over a very narrow time window. In 
> this case, the docs to be queried may be stored in a very small portion of 
> the collections, so we don't need to query all of them.
> I suggest this patch for the read alias to minimize the queried collections. 
> Three parameters for the CREATEALIAS action are added.
> || Key || Type || Required || Default || Description ||
> | timeField | string | No | | The time field name for time series data. It 
> should be a date type. |
> | dateTimeFormat | string | No | | The format of the timestamp for collection 
> creation. Every collection should have a suffix (starting with "_") in this 
> format. 
> Ex. dateTimeFormat: yyyyMMdd, collectionName: col_20160927
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> | timeZone | string | No | | The time zone information for the dateTimeFormat 
> parameter.
> Ex. GMT+9. 
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> And then when we query with a filter query like "timeField:\[fromTime TO 
> toTime\]", only the collections that have docs in the given time range will 
> be queried.
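As a quick illustration of the suffix convention described above: the example collection name col_20160927 corresponds to an eight-character yyyyMMdd date pattern. A minimal sketch (the "col_" prefix matches the example; the helper name is mine):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class CollectionSuffix {
    // Build the daily collection name for a given day, e.g. col_20160927.
    static String collectionNameFor(LocalDate day) {
        return "col_" + day.format(DateTimeFormatter.ofPattern("yyyyMMdd"));
    }

    public static void main(String[] args) {
        System.out.println(collectionNameFor(LocalDate.of(2016, 9, 27))); // col_20160927
    }
}
```

A router following this scheme could then parse each alias member's suffix back into a date with the same pattern and skip collections outside the filter query's [fromTime TO toTime] range.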






[jira] [Commented] (SOLR-11359) An autoscaling/suggestions endpoint to recommend operations

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205891#comment-16205891
 ] 

ASF subversion and git services commented on SOLR-11359:


Commit fb97ff1400aada2169797536dad640edd76c71ab in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fb97ff1 ]

SOLR-11359: Refactored


> An autoscaling/suggestions endpoint to recommend operations
> ---
>
> Key: SOLR-11359
> URL: https://issues.apache.org/jira/browse/SOLR-11359
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> Autoscaling can make suggestions to users on what operations they can perform 
> to improve the health of the cluster.
> The suggestions will have the following information:
> * http end point
> * http method (POST, DELETE)
> * command payload
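Purely as an illustration of the three pieces of information listed above, one suggestion entry might look something like the following sketch (all field names and the payload command here are hypothetical, not the actual response format):

```json
{
  "endpoint": "/admin/collections",
  "method": "POST",
  "payload": {
    "move-replica": { "replica": "core_node3", "targetNode": "node2:8983_solr" }
  }
}
```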






[jira] [Commented] (SOLR-11359) An autoscaling/suggestions endpoint to recommend operations

2017-10-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205881#comment-16205881
 ] 

ASF subversion and git services commented on SOLR-11359:


Commit 141b08a40fbc95d01a6c75fc4c063da556c4f649 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=141b08a ]

SOLR-11359: Refactored


> An autoscaling/suggestions endpoint to recommend operations
> ---
>
> Key: SOLR-11359
> URL: https://issues.apache.org/jira/browse/SOLR-11359
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> Autoscaling can make suggestions to users on what operations they can perform 
> to improve the health of the cluster.
> The suggestions will have the following information:
> * http end point
> * http method (POST, DELETE)
> * command payload






[jira] [Commented] (SOLR-9562) Minimize queried collections for time series alias

2017-10-16 Thread Radu Gheorghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205860#comment-16205860
 ] 

Radu Gheorghe commented on SOLR-9562:
-

My two cents:
* if data is relatively low-velocity, merging shards of an existing collection 
that's already "done" (e.g. yesterday's collection) by way of a pure merge 
should help with scaling the cluster. Here's how Elasticsearch does it: 
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html
* if data is high-velocity, one will likely have to live with the trade-off 
between many collections (i.e. rotate them more frequently, which would then 
write faster because of less merging and read faster because they are "done" 
faster => better caching for those "wrapped up" collections) and fewer 
collections (which implies fewer shards). I'm saying this because the benefits 
of merging shards may not be worth the overhead.

That said, loading/unloading shards might help reduce the overhead of many 
shards, assuming that old data is rarely touched. I'm probably getting way 
ahead of myself here, but a read alias that would automatically load shards 
(closed by a cronjob looking at activity) would be pretty awesome (especially 
if we think about them in the context of AutoScaling and shared file systems).

> Minimize queried collections for time series alias
> --
>
> Key: SOLR-9562
> URL: https://issues.apache.org/jira/browse/SOLR-9562
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Eungsop Yoo
>Priority: Minor
> Attachments: SOLR-9562-v2.patch, SOLR-9562.patch
>
>
> For indexing time series data (such as large log data), we can create a new 
> collection regularly (hourly, daily, etc.) with a write alias and create a 
> read alias for all of those collections. But all of the collections of the 
> read alias are queried even if we search over a very narrow time window. In 
> this case, the docs to be queried may be stored in a very small portion of 
> the collections, so we don't need to query all of them.
> I suggest this patch for the read alias to minimize the queried collections. 
> Three parameters for the CREATEALIAS action are added.
> || Key || Type || Required || Default || Description ||
> | timeField | string | No | | The time field name for time series data. It 
> should be a date type. |
> | dateTimeFormat | string | No | | The format of the timestamp for collection 
> creation. Every collection should have a suffix (starting with "_") in this 
> format. 
> Ex. dateTimeFormat: yyyyMMdd, collectionName: col_20160927
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> | timeZone | string | No | | The time zone information for the dateTimeFormat 
> parameter.
> Ex. GMT+9. 
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> And then when we query with a filter query like "timeField:\[fromTime TO 
> toTime\]", only the collections that have docs in the given time range will 
> be queried.






pom.xml.template schemaLocation

2017-10-16 Thread Christine Poerschke (BLOOMBERG/ LONDON)
I noticed that we have

xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd"

currently, and that XSD doesn't seem to exist at present.

There's a https://maven.apache.org/xsd/maven-v3_0_0.xsd and
a https://maven.apache.org/xsd/maven-4.0.0.xsd
linked on http://maven.apache.org/ref/3.5.0/maven-model/maven.html, however.

Can I change our schemaLocation in the pom.xml.template files to 
https://maven.apache.org/xsd/maven-4.0.0.xsd, or is some other change needed?
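For illustration, the proposed edit would leave each template's root element looking roughly like this (element content elided; whitespace/formatting is just a sketch):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <!-- ... -->
</project>
```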

Thanks,
Christine



[jira] [Updated] (SOLR-11494) Expected mime type application/octet-stream but got text/html

2017-10-16 Thread khawaja MUHAMMAD Shoaib (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

khawaja MUHAMMAD Shoaib updated SOLR-11494:
---
Attachment: SolrConfig.java
MerchantModel.java
MerchantRepository.java

Attached: Solr configuration file ("SolrConfig"), model, and repository.

> Expected mime type application/octet-stream but got text/html
> -
>
> Key: SOLR-11494
> URL: https://issues.apache.org/jira/browse/SOLR-11494
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI, SolrJ
>Affects Versions: 6.5
> Environment: Windows 10
> Java jdk1.8.0_144
> Solr 6.5.0
> Spring Data Solr 2.0.5.RELEASE
> Spring Version 4.3.12.RELEASE
>Reporter: khawaja MUHAMMAD Shoaib
> Attachments: MerchantModel.java, MerchantRepository.java, 
> SolrConfig.java
>
>
> I have been following the tutorial at the link below to implement Spring Data Solr: 
> http://www.baeldung.com/spring-data-solr
> Attached are my config file, model, and repository for Spring Data Solr.
> When I make any query or save my model, I receive the exception below.
> My Solr works fine when I ping it from the browser at 
> http://127.0.0.1:8983/solr/
> {code:java}
> MerchantModel model = new MerchantModel();
> model.setId("2");
> model.setLocation("31.5287,74.4121");
> model.setTitle("khawaja");
> merchantRepository.save(model);
> {code}
>  
> Upon save I get the exception below:
> ###
> org.springframework.data.solr.UncategorizedSolrException: Error from server 
> at http://127.0.0.1:8983/solr: Expected mime type application/octet-stream 
> but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ; nested exception is 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 404 Not Found
> 
> HTTP ERROR 404
> Problem accessing /solr/update. Reason:
> Not Found
> 
> 
> ###






[jira] [Created] (SOLR-11494) Expected mime type application/octet-stream but got text/html

2017-10-16 Thread khawaja MUHAMMAD Shoaib (JIRA)
khawaja MUHAMMAD Shoaib created SOLR-11494:
--

 Summary: Expected mime type application/octet-stream but got 
text/html
 Key: SOLR-11494
 URL: https://issues.apache.org/jira/browse/SOLR-11494
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCLI, SolrJ
Affects Versions: 6.5
 Environment: Windows 10
Java jdk1.8.0_144
Solr 6.5.0
Spring Data Solr 2.0.5.RELEASE
Spring Version 4.3.12.RELEASE
Reporter: khawaja MUHAMMAD Shoaib


I have been following the tutorial at the link below to implement Spring Data Solr:
http://www.baeldung.com/spring-data-solr

Attached are my config file, model, and repository for Spring Data Solr.

When I make any query or save my model, I receive the exception below.
My Solr works fine when I ping it from the browser at http://127.0.0.1:8983/solr/

{code:java}
MerchantModel model = new MerchantModel();
model.setId("2");
model.setLocation("31.5287,74.4121");
model.setTitle("khawaja");
merchantRepository.save(model);
{code}

Upon save I get the exception below:
###
org.springframework.data.solr.UncategorizedSolrException: Error from server at 
http://127.0.0.1:8983/solr: Expected mime type application/octet-stream but got 
text/html. 


Error 404 Not Found

HTTP ERROR 404
Problem accessing /solr/update. Reason:
Not Found


; nested exception is 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:8983/solr: Expected mime type 
application/octet-stream but got text/html. 


Error 404 Not Found

HTTP ERROR 404
Problem accessing /solr/update. Reason:
Not Found


###
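For what it's worth, a 404 from /solr/update usually means the request URL is missing the core (or collection) name: updates must go to /solr/&lt;coreName&gt;/update, not the bare /solr root. A minimal sketch of the expected URL shape, assuming a hypothetical core named "merchants" (the helper below is illustrative, not part of SolrJ or Spring Data Solr):

```java
public class SolrUpdateUrl {
    // Join a Solr base URL and a core name into the update endpoint,
    // e.g. http://127.0.0.1:8983/solr + merchants -> .../solr/merchants/update
    static String updateUrl(String baseUrl, String core) {
        String trimmed = baseUrl.endsWith("/")
                ? baseUrl.substring(0, baseUrl.length() - 1)
                : baseUrl;
        return trimmed + "/" + core + "/update";
    }

    public static void main(String[] args) {
        System.out.println(updateUrl("http://127.0.0.1:8983/solr/", "merchants"));
    }
}
```

In practice this tends to mean configuring the Solr client (or the Spring Data Solr template/repository) with the core name, so the client builds /solr/merchants/update rather than /solr/update.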





