JIRA: to use "8.x" or not?

2019-03-15 Thread David Smiley
In JIRA, should we bother with having "version.x" versions or not, and only
have, say, 8.1, 8.2, etc.?  I seem to be of two minds on this, having
thought we did things one way and recently thought the other.  Our release
process wiki page[1] makes no mention of such versions near where JIRA is
discussed AFAICT; instead I see explicit mention of using the minor
release numbers.  I'm baffled why I'm on record saying the inverse
only 6 weeks ago.  In practice, it appears the Lucene side uses these
"version.x" versions[2].  Solr doesn't at the moment, only because of some
recent cleanup.

FWIW I think we shouldn't bother but who knows what I'll think in a month
;-P

[1]: https://wiki.apache.org/lucene-java/ReleaseTodo#Add_New_JIRA_Versions
[2]:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20LUCENE%20AND%20fixVersion%20%3D%208.x

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-9.0.4) - Build # 100 - Failure!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/100/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSL: 23 rsp: 
{responseHeader={zkConnected=true,status=0,QTime=10,params={q=*:*,_stateVer_=dv_coll:4,group.limit=100,rows=100,wt=javabin,version=2,group.field=intGSL,group=true}},grouped={intGSL={matches=59,groups=[{groupValue=null,doclist={numFound=23,start=0,maxScore=1.0,docs=[SolrDocument{id=8,
 intGSF=1129059394, longGSF=2235493048407907383, doubleGSF=20010.298021521136, 
floatGSF=20003.346, dateGSF=Thu Dec 03 06:32:24 GMT-04:00 217998629, 
stringGSF=base_string_971492__00020009, boolGSF=true, 
sortableGSF=base_string_114676__00020011, _version_=1628126672398057472, 
_root_=8}, SolrDocument{id=10, intGSF=1129059394, longGSF=2235493048407907383, 
doubleGSF=20010.298021521136, floatGSF=20003.346, dateGSF=Thu Dec 03 06:32:24 
GMT-04:00 217998629, stringGSF=base_string_971492__00020009, boolGSF=true, 
sortableGSF=base_string_114676__00020011, _version_=1628126672398057473, 
_root_=10}, SolrDocument{id=11, intGSF=1129059394, longGSF=2235493048407907383, 
doubleGSF=20010.298021521136, floatGSF=20003.346, dateGSF=Thu Dec 03 06:32:24 
GMT-04:00 217998629, stringGSF=base_string_971492__00020009, boolGSF=false, 
sortableGSF=base_string_114676__00020011, _version_=1628126672398057474, 
_root_=11}, SolrDocument{id=13, intGSF=1129059394, longGSF=2235493048407907383, 
doubleGSF=20010.298021521136, floatGSF=20003.346, dateGSF=Thu Dec 03 06:32:24 
GMT-04:00 217998629, stringGSF=base_string_971492__00020009, boolGSF=false, 
sortableGSF=base_string_114676__00020011, _version_=1628126672398057475, 
_root_=13}, SolrDocument{id=14, intGSF=1129069395, longGSF=2235493048407917389, 
doubleGSF=30010.298021521136, floatGSF=30006.346, dateGSF=Thu Dec 03 06:32:34 
GMT-04:00 217998629, stringGSF=base_string_971492__00030011, boolGSF=true, 
sortableGSF=base_string_114676__00030019, _version_=1628126672398057476, 
_root_=14}, SolrDocument{id=10015, _version_=1628126672398057477, 
_root_=10015}, SolrDocument{id=10020, _version_=1628126672398057478, 
_root_=10020}, SolrDocument{id=24, intGSF=1129079404, 
longGSF=2235493048407927396, doubleGSF=40016.298021521136, floatGSF=40009.348, 
dateGSF=Thu Dec 03 06:32:44 GMT-04:00 217998629, 
stringGSF=base_string_971492__00040013, boolGSF=true, 
sortableGSF=base_string_114676__00040028, _version_=1628126672398057479, 
_root_=24}, SolrDocument{id=27, intGSF=1129079404, longGSF=2235493048407927396, 
doubleGSF=40016.298021521136, floatGSF=40009.348, dateGSF=Thu Dec 03 06:32:44 
GMT-04:00 217998629, stringGSF=base_string_971492__00040013, boolGSF=false, 
sortableGSF=base_string_114676__00040028, _version_=1628126672398057480, 
_root_=27}, SolrDocument{id=28, intGSF=1129089404, longGSF=2235493048407937399, 
doubleGSF=50020.298021521136, floatGSF=50016.348, dateGSF=Thu Dec 03 06:32:54 
GMT-04:00 217998629, stringGSF=base_string_971492__00050019, boolGSF=true, 
sortableGSF=base_string_114676__00050033, _version_=1628126672398057481, 
_root_=28}, SolrDocument{id=32, intGSF=1129089404, longGSF=2235493048407937399, 
doubleGSF=50020.298021521136, floatGSF=50016.348, dateGSF=Thu Dec 03 06:32:54 
GMT-04:00 217998629, stringGSF=base_string_971492__00050019, boolGSF=true, 
sortableGSF=base_string_114676__00050033, _version_=1628126672398057482, 
_root_=32}, SolrDocument{id=38, intGSF=1129099407, longGSF=2235493048407947408, 
doubleGSF=60027.298021521136, floatGSF=60024.348, dateGSF=Thu Dec 03 06:33:04 
GMT-04:00 217998629, stringGSF=base_string_971492__00060026, boolGSF=true, 
sortableGSF=base_string_114676__00060036, _version_=1628126672398057483, 
_root_=38}, SolrDocument{id=40, intGSF=1129099407, longGSF=2235493048407947408, 
doubleGSF=60027.298021521136, floatGSF=60024.348, dateGSF=Thu Dec 03 06:33:04 
GMT-04:00 217998629, stringGSF=base_string_971492__00060026, boolGSF=true, 
sortableGSF=base_string_114676__00060036, _version_=1628126672398057484, 
_root_=40}, SolrDocument{id=42, intGSF=1129109407, longGSF=2235493048407957414, 
doubleGSF=70032.29802152113, floatGSF=70026.34, dateGSF=Thu Dec 03 06:33:14 
GMT-04:00 217998629, stringGSF=base_string_971492__00070034, boolGSF=true, 
sortableGSF=base_string_114676__00070043, _version_=1628126672398057485, 
_root_=42}, SolrDocument{id=48, intGSF=1129109407, longGSF=2235493048407957414, 
doubleGSF=70032.29802152113, floatGSF=70026.34, dateGSF=Thu Dec 03 06:33:14 
GMT-04:00 217998629, stringGSF=base_string_971492__00070034, boolGSF=true, 
sortableGSF=base_string_114676__00070043, _version_=1628126672398057486, 
_root_=48}, SolrDocument{id=10005, _version_=1628126672654958593, 
_root_=10005}, SolrDocument{id=10010, _version_=1628126672660201476, 
_root_=10010}, SolrDocument{id=10025, _version_=1628126672660201478, 
_root_=10025}, SolrDocument{id=10035, _version_=1628126672661250050,

[JENKINS] Lucene-Solr-Tests-master - Build # 3214 - Failure

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3214/

1 tests failed.
FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting

Error Message:
Timeout occurred while waiting response from server at: 
http://127.0.0.1:32856/solr/myAlias__CRA__Heart_of_Gold_shard3_replica_n14

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Timeout 
occurred while waiting response from server at: 
http://127.0.0.1:32856/solr/myAlias__CRA__Heart_of_Gold_shard3_replica_n14
at 
__randomizedtesting.SeedInfo.seed([5477A89404F0F38E:65E72478A0A60CA7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:125)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:46)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:485)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:964)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting(CategoryRoutedAliasUpdateProcessorTest.java:366)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rul

[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 54 - Still Unstable

2019-03-15 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/54/

6 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Failed while waiting for active collection Timeout waiting to see state for 
collection=awhollynewcollection_0 
:DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/5)={
   "pullReplicas":"0",   "replicationFactor":"4",   "shards":{ "shard1":{   
"range":"8000-b332",   "state":"active",   "replicas":{ 
"core_node2":{   "core":"awhollynewcollection_0_shard1_replica_n1", 
  "base_url":"http://127.0.0.1:34042/solr";,   
"node_name":"127.0.0.1:34042_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"awhollynewcollection_0_shard1_replica_n3",   
"base_url":"http://127.0.0.1:44384/solr";,   
"node_name":"127.0.0.1:44384_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node6":{  
 "core":"awhollynewcollection_0_shard1_replica_n5",   
"base_url":"http://127.0.0.1:43110/solr";,   
"node_name":"127.0.0.1:43110_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node9":{  
 "core":"awhollynewcollection_0_shard1_replica_n7",   
"base_url":"http://127.0.0.1:35026/solr";,   
"node_name":"127.0.0.1:35026_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}}}, "shard2":{   
"range":"b333-e665",   "state":"active",   "replicas":{ 
"core_node11":{   "core":"awhollynewcollection_0_shard2_replica_n8",
   "base_url":"http://127.0.0.1:34042/solr";,   
"node_name":"127.0.0.1:34042_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node13":{ 
  "core":"awhollynewcollection_0_shard2_replica_n10",   
"base_url":"http://127.0.0.1:44384/solr";,   
"node_name":"127.0.0.1:44384_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node15":{ 
  "core":"awhollynewcollection_0_shard2_replica_n12",   
"base_url":"http://127.0.0.1:43110/solr";,   
"node_name":"127.0.0.1:43110_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node17":{ 
  "core":"awhollynewcollection_0_shard2_replica_n14",   
"base_url":"http://127.0.0.1:35026/solr";,   
"node_name":"127.0.0.1:35026_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}}}, "shard3":{   
"range":"e666-1998",   "state":"active",   "replicas":{ 
"core_node19":{   "core":"awhollynewcollection_0_shard3_replica_n16",   
"base_url":"http://127.0.0.1:34042/solr";,   
"node_name":"127.0.0.1:34042_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node21":{ 
  "core":"awhollynewcollection_0_shard3_replica_n18",   
"base_url":"http://127.0.0.1:44384/solr";,   
"node_name":"127.0.0.1:44384_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node23":{ 
  "core":"awhollynewcollection_0_shard3_replica_n20",   
"base_url":"http://127.0.0.1:43110/solr";,   
"node_name":"127.0.0.1:43110_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node25":{ 
  "core":"awhollynewcollection_0_shard3_replica_n22",   
"base_url":"http://127.0.0.1:35026/solr";,   
"node_name":"127.0.0.1:35026_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}}}, "shard4":{   
"range":"1999-4ccb",   "state":"active",   "replicas":{ 
"core_node27":{   "core":"awhollynewcollection_0_shard4_replica_n24",   
"base_url":"http://127.0.0.1:34042/solr";,   
"node_name":"127.0.0.1:34042_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node29":{ 
  "core":"awhollynewcollection_0_shard4_replica_n26",   
"base_url":"http://127.0.0.1:44384/solr";,   
"node_name":"127.0.0.1:44384_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node31":{ 
  "core":"awhollynewcollection_0_shard4_replica_n28",   
"base_url":"http://127.0.0.1:43110/solr";,   
"node_name":"127.0.0.1:43110_solr",   "state":"down",   
"type":"NRT",   "force_set_sta

Re: Lucene/Solr 8.0

2019-03-15 Thread David Smiley
RE https://wiki.apache.org/lucene-java/ReleaseTodo#Produce_Release_Notes

I'm not sure if it'd make that much difference, but I'd like to move the
release notes step down a little to follow the creation of the release
branch, since that's when the features are truly frozen.  Cool?

Jan: Yeah, I totally agree that the quality varies.  I think the release
highlights are a fundamental editorial task requiring someone to look at the
entirety of the issues, with plenty of judgement calls, to decide what's worth
mentioning.  That "releasedocmaker" tool looks cool for generating a
CHANGELOG.md, but I don't think it'd be that great for the release
highlights.  Well, it might be okay, but the results would simply be "a
start" rather than the blank slate we begin with each release.  Oftentimes
the biggest things that happen in a release are composed of multiple
issues, not just one; yet "releasedocmaker" is a per-issue tool.

Even though the release announcement has been published, it's never too
late to retroactively edit the information published to Solr's website!  To
that end, I will edit the wiki version after sending this email to add an
item about enhanced nested document support.  I think more should be said
about HTTP/2 by someone following it closely, and in particular it should
mention that work on it continues into 8.1 (and beyond?).  Please mention
what value this brings.  These two items are the big ones IMO, but others
may have more to add.
https://wiki.apache.org/solr/ReleaseNote80
I will take care to re-publish it to the website next week.

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Fri, Mar 15, 2019 at 4:20 AM Jan Høydahl  wrote:

> The varying quality of release notes has been a problem for a long time.
> Sometimes random unimportant features are highlighted and the list gets
> way too long,
> and this time it was way too short.
>
> I think another alternative is to get some help from JIRA and Yetus here,
> by enabling the
> "release notes" field in JIRA and starting to use
> https://yetus.apache.org/documentation/0.9.0/releasedocmaker/
>
> I have not tried it, but I think it is in use by other projects. There would
> of course need to be
> some guidelines for when to use the field and when not to, but at least most
> of the work would
> be done by developers when resolving an important JIRA, not by the RM at
> release time.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 14. mar. 2019 kl. 18:44 skrev Adrien Grand :
>
> +1 David. The highlights section is embarrassing indeed; we should call
> for action earlier in the future, like the ReleaseTodo on the wiki
> suggests[1].
> I don't think it is the only problem though. In the couple of
> releases that I managed, I felt like the production of release notes
> was one of the most unpleasant parts of the process, because
> not many people tend to help. It would be nice if we could figure out
> a way to encourage more committers to collaborate on the production
> of release notes. Or maybe we should stop doing this at release time,
> use the same approach as MIGRATE.txt, and ask contributors to
> document highlights at the same time as they push a change that is
> worth highlighting?
>
> [1] https://wiki.apache.org/lucene-java/ReleaseTodo#Produce_Release_Notes
>
>
> On Thu, Mar 14, 2019 at 2:34 PM David Smiley 
> wrote:
>
>
> The Solr highlights section of the announcement is so severely incomplete as
> to appear embarrassing.
> In the absence of time/effort to fix it, it should simply have been omitted;
> the CHANGES.txt has the details.
> That would not have been embarrassing.
> Maybe next time we could have a call to action about the release
> highlights that coincides with the creation of the release branch;
> that is a juncture at which the features are frozen and there's plenty of
> time to update.
> Last night I saw the call to action, but it was far too late for me to
> help.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Mar 13, 2019 at 10:02 AM Adrien Grand  wrote:
>
>
> I organized existing items of the Lucene release notes into sections
> and added a new item about FeatureField,
> LongPoint#newDistanceFeatureQuery and
> LatLonPoint#newDistanceFeatureQuery.
>
> On Tue, Mar 12, 2019 at 5:54 PM Alan Woodward 
> wrote:
>
>
> Jim and I have created wiki pages for the 8.0 release highlights here:
> https://wiki.apache.org/solr/ReleaseNote80
> https://wiki.apache.org/lucene-java/ReleaseNote80
>
> Feel free to edit and improve them - the Solr one in particular could do
> with some beefing up.
>
>
> On 20 Feb 2019, at 11:37, Noble Paul  wrote:
>
> I'm committing them,
> Thanks Ishan
>
> On Wed, Feb 20, 2019 at 8:38 PM Alan Woodward 
> wrote:
>
>
> Awesome, thank you Ishan!
>
> On 20 Feb 2019, at 09:15, Ishan Chattopadhyaya 
> wrote:
>
> Would anyone like to volunteer to

[JENKINS] Lucene-Solr-NightlyTests-8.0 - Build # 21 - Still unstable

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.0/21/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! [MMapDirectory, 
InternalHttpClient, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:503)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1184)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:267)  at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:420) 
 at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:237) 
 at 
org.apache.solr.cloud.RecoveryStrategy.doReplicateOnlyRecovery(RecoveryStrategy.java:382)
  at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:328)  
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:307)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1056)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:876)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:164)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:15

[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+8) - Build # 269 - Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/269/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [MMapDirectory, 
MMapDirectory, InternalHttpClient, MMapDirectory, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
 at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:517)  
at org.apache.solr.core.SolrCore.(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:744)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$Obje

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 23786 - Failure!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23786/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=21710, 
name=testExecutor-7790-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=21710, name=testExecutor-7790-thread-1, 
state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42087/_vrj/rz: ADDREPLICA failed to create 
replica
at __randomizedtesting.SeedInfo.seed([6B885EAB8DF0F9FC]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCollectionInOneInstance$1(BasicDistributedZkTest.java:657)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting

Error Message:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: 
http://127.0.0.1:33435/solr/myAlias__CRA__Constructor_shard3_replica_n13

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: 
http://127.0.0.1:33435/solr/myAlias__CRA__Constructor_shard3_replica_n13
at 
__randomizedtesting.SeedInfo.seed([6B885EAB8DF0F9FC:5A18D24729A606D5]:0)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:499)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:964)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting(CategoryRoutedAliasUpdateProcessorTest.java:365)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtest

[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-03-15 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794041#comment-16794041
 ] 

Gus Heck commented on SOLR-13131:
-

Ah, I had noticed that, and that adding 1. 2. 3. made it go away... I wasn't 
aware that "." without a number was an option. Will try to fix soon, but pretty 
busy this weekend.

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: indexingWithCRA.png, indexingwithoutCRA.png, 
> indexintWithoutCRA2.png
>
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data-driven 
> creation of collections based on the values of a field, and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device-type segregation, but I 
> could also see this leading to the ability to direct updates to 
> tenant-specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time-routed data 
> that is also segregated by device.
> Further design details to be added in comments.
>  
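
A rough illustration of the direction described in the issue above, not the final API from this ticket: routed aliases are managed through the Collections API CREATEALIAS action, so a category routed alias could plausibly be created with router parameters along these lines. The parameter names (router.name=category, router.field, the create-collection.* prefix) follow the existing time-routed-alias convention and should be treated as assumptions here, as are the alias, field, and shard-count values.

{code:java}
// Hedged sketch only: create a category routed alias via the Collections API.
// Parameter names mirror the time-routed-alias CREATEALIAS parameters and are
// assumptions relative to this in-progress ticket.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CategoryRoutedAliasSketch {
  public static void createAlias(SolrClient client) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "CREATEALIAS");
    params.set("name", "deviceData");               // the alias searched as a whole
    params.set("router.name", "category");          // proposed router type
    params.set("router.field", "deviceType");       // field whose values drive collection creation
    params.set("create-collection.numShards", "2"); // template for the per-category collections

    GenericSolrRequest req =
        new GenericSolrRequest(SolrRequest.METHOD.POST, "/admin/collections", params);
    req.process(client);
  }
}
{code}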



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-03-15 Thread Scott Blum (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794015#comment-16794015
 ] 

Scott Blum commented on SOLR-13320:
---

Shalin lemme break this down a bit...

Imagine you're restoring a collection from a backup, but you want to be able to 
accept writes while this is in progress.  You start accepting writes (of new 
data) on the new, empty collection, then in the background you want to backfill 
from your backup copy, but you don't want to overwrite anything that has been 
written recently.

Setting "_version_:-1" on all the incoming backfill docs is almost what you 
want: add any documents that don't exist, but don't overwrite any documents 
that do exist.  The problem is that the entire batch gets rejected if even one 
document already exists.  We just want a way to ignore conflicts and quietly 
drop the offending documents rather than rejecting the entire batch.

"ignoreConflicts" might be a better name.

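For concreteness, here is a minimal SolrJ sketch of the backfill scenario above. The _version_=-1 "document must not already exist" check is existing Solr optimistic-concurrency behavior; the ignoreDuplicates parameter is only what this issue proposes and does not exist yet, and the ZooKeeper host, collection, and document values are made up.

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class BackfillSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      UpdateRequest backfill = new UpdateRequest();
      // Proposed by this issue: silently drop docs that already exist instead of
      // failing the whole batch with a version conflict.
      backfill.setParam("ignoreDuplicates", "true");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-from-backup-1");
      doc.addField("_version_", -1L); // optimistic concurrency: only add if absent
      backfill.add(doc);

      backfill.process(client, "restored_collection");
    }
  }
}
{code}
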
> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them if 
> an option  {{ignoreDuplicates=true}} is specified



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-03-15 Thread Scott Blum (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794015#comment-16794015
 ] 

Scott Blum edited comment on SOLR-13320 at 3/15/19 11:19 PM:
-

[~shalinmangar] lemme break this down a bit...

Imagine you're restoring a collection from a backup, but you want to be able to 
accept writes while this is in progress.  You start accepting writes (of new 
data) on the new, empty collection, then in the background you want to backfill 
from your backup copy, but you don't want to overwrite anything that has been 
written recently.

Setting "version:-1" on all the incoming, backfill doc is almost what you want: 
add any documents that don't exist, but don't overwrite any documents that do 
exist.  The problem is that the entire batch gets rejected if even one document 
already exists.  We just want a way to be able to ignore conflicts and quietly 
drop the offending documents rather than rejecting the entire batch.

"ignoreConflicts" might be a better name.


was (Author: dragonsinth):
[~shalinmangar] lemme break this down a bit...

Imagine you're restoring a collection from a backup, but you want to be able to 
accept writes while this is in progress.  You start accepting writes (of new 
data) on the new, empty collection, then in the background you want to backfill 
from your backup copy, but you don't want to overwrite anything that has been 
written recently.

Setting "version:-1" on all the incoming, backfill doc is almost what you 
want-- add any documents that don't exist, but don't overwrite any documents 
that do exist.  The problem is that the entire batch gets rejected if even one 
document already exists.  We just want a way to be able to ignore conflicts and 
quietly drop the offending documents rather than rejecting the entire batch.

"ignoreConflicts" might be a better name.

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them if 
> an option  {{ignoreDuplicates=true}} is specified



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-03-15 Thread Scott Blum (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794015#comment-16794015
 ] 

Scott Blum edited comment on SOLR-13320 at 3/15/19 11:19 PM:
-

[~shalinmangar] lemme break this down a bit...

Imagine you're restoring a collection from a backup, but you want to be able to 
accept writes while this is in progress.  You start accepting writes (of new 
data) on the new, empty collection, then in the background you want to backfill 
from your backup copy, but you don't want to overwrite anything that has been 
written recently.

Setting "version:-1" on all the incoming, backfill doc is almost what you 
want-- add any documents that don't exist, but don't overwrite any documents 
that do exist.  The problem is that the entire batch gets rejected if even one 
document already exists.  We just want a way to be able to ignore conflicts and 
quietly drop the offending documents rather than rejecting the entire batch.

"ignoreConflicts" might be a better name.


was (Author: dragonsinth):
Shalin lemme break this down a bit...

Imagine you're restoring a collection from a backup, but you want to be able to 
accept writes while this is in progress.  You start accepting writes (of new 
data) on the new, empty collection, then in the background you want to backfill 
from your backup copy, but you don't want to overwrite anything that has been 
written recently.

Setting "version:-1" on all the incoming, backfill doc is almost what you 
want-- add any documents that don't exist, but don't overwrite any documents 
that do exist.  The problem is that the entire batch gets rejected if even one 
document already exists.  We just want a way to be able to ignore conflicts and 
quietly drop the offending documents rather than rejecting the entire batch.

"ignoreConflicts" might be a better name.

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them if 
> an option  {{ignoreDuplicates=true}} is specified



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8723) Bad interaction between WordDelimiterGraphFilter, StopFilter and FlattenGraphFilter

2019-03-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolás Lichtmaier updated LUCENE-8723:
---
Affects Version/s: 8.0

> Bad interaction between WordDelimiterGraphFilter, StopFilter and 
> FlattenGraphFilter
> ---
>
> Key: LUCENE-8723
> URL: https://issues.apache.org/jira/browse/LUCENE-8723
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 7.7.1, 8.0
>Reporter: Nicolás Lichtmaier
>Priority: Major
>
> I was debugging an issue (missing tokens after analysis) and when I enabled 
> Java assertions I uncovered a bug when using WordDelimiterGraphFilter + 
> StopFilter + FlattenGraphFilter.
> I could reproduce the issue in a small piece of code. This code gives an 
> assertion failure when assertions are enabled (-ea java option):
> {code:java}
> import java.io.StringReader;
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.core.FlattenGraphFilterFactory;
> import org.apache.lucene.analysis.core.StopFilterFactory;
> import org.apache.lucene.analysis.custom.CustomAnalyzer;
> import org.apache.lucene.analysis.custom.CustomAnalyzer.Builder;
> import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;
> import org.apache.lucene.analysis.standard.StandardTokenizerFactory;
>
>     // Reproduces the assertion failure when run with assertions enabled (-ea).
>     Builder builder = CustomAnalyzer.builder();
>     builder.withTokenizer(StandardTokenizerFactory.class);
>     builder.addTokenFilter(WordDelimiterGraphFilterFactory.class,
>         "preserveOriginal", "1");
>     builder.addTokenFilter(StopFilterFactory.class);
>     builder.addTokenFilter(FlattenGraphFilterFactory.class);
>     Analyzer analyzer = builder.build();
>
>     TokenStream ts = analyzer.tokenStream("*", new StringReader("x7in"));
>     ts.reset();
>     while (ts.incrementToken())
>         ;
> {code}
> This gives:
> {code}
> Exception in thread "main" java.lang.AssertionError: 2
>      at 
> org.apache.lucene.analysis.core.FlattenGraphFilter.releaseBufferedToken(FlattenGraphFilter.java:195)
>      at 
> org.apache.lucene.analysis.core.FlattenGraphFilter.incrementToken(FlattenGraphFilter.java:258)
>      at com.wolfram.textsearch.AnalyzerError.main(AnalyzerError.java:32)
> {code}
> Maybe removing stop words after WordDelimiterGraphFilter is wrong, I don't 
> know. However, it is the only way to process stop words generated by that 
> filter. In any case, it should not eat tokens or trigger assertion failures. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794013#comment-16794013
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 209d71c08c9af8f32e47e29704d0bc7db4899f3f in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=209d71c ]

SOLR-12923: Mea culpa: Remove useless import of java.lang... that breaks 
precommit

(cherry picked from commit 5c143022e7abcdf14a570786afec4ff099fd581c)


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794011#comment-16794011
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit b7546fd19ba29cc2046ee73bf4e145667f5252b6 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b7546fd ]

SOLR-12923: tweak the randomization in testCreateLargeSimCollections to reduce 
the max possible totalCores

also decrease the number of iters while increasing the cluster shape wait time to 
reduce the risk of spurious failures on machines under heavy contention w/o 
making the test any slower on average

(cherry picked from commit c79aeee5f9a013c280a76a8d6b04bea63f212909)


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794012#comment-16794012
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 4552eeddc613a5e143c97963b504102c6f41af46 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4552eed ]

SOLR-12923: fix SimClusterStateProvider to use lock.lockInterruptibly() 
exclusively, and make SimCloudManager's Callable checks tolerant of Callables 
that may have failed due to interrupts w/o explicitly throwing 
InterruptedException

(cherry picked from commit 1a54c6b19db9dcb1081e43614bf479e0ac7bf177)


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794010#comment-16794010
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit f1e37f44175e10db7abfc88590e695374af59ff3 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f1e37f4 ]

SOLR-12923: Fix some issues w/concurrency and exception swallowing in 
SimClusterStateProvider/SimCloudManager

There are 3 tightly related bug fixes in these changes:

1) ConcurrentModificationExceptions were being thrown by some
   SimClusterStateProvider methods when creating collections/replicas due to
   the use of ArrayLists in nodeReplicaMap. These ArrayLists were changed to
   use synchronizedList wrappers.
2) The exceptions from #1 were being swallowed/hidden by code using
   SimCloudManager.submit() w/o checking the result of the resulting Future
   object. (As a result, tests waiting for a particular ClusterShape would
   time out regardless of how long they waited.) To protect against "silent"
   failures like this, SimCloudManager.submit() has been updated to wrap all
   input Callables such that any uncaught errors will be logged and "counted."
   SimSolrCloudTestCase will ensure a suite-level failure if any such failures
   are counted.
3) The changes in #2 exposed additional concurrency problems with the
   Callables involved in leader election: these would frequently throw
   IllegalStateExceptions due to assumptions about the state/existence of
   replicas when the Callables were created vs. when they were later run --
   notably, a Callable may have been created that held a reference to a Slice,
   but by the time that Callable was run, the collection (or a node, etc...)
   referred to by that Slice may have been deleted. While fixing this, the
   leader election logic was also cleaned up so that adding a replica only
   triggers leader election for that shard, not every shard in the collection.

While auditing this code, all usage of SimClusterStateProvider.lock was also
cleaned up to remove risky points where an exception could occur after
acquiring the lock but before the try/finally that ensures it is unlocked.

(cherry picked from commit 76babf876a49f82959cc36a1d7ef922a9c2dddff)
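
As an aside, the submit-wrapping idea in point 2 of the commit message above is a general pattern. A minimal, self-contained sketch follows, using hypothetical class and method names rather than the actual SimCloudManager code: wrap every submitted Callable so that background failures are logged/counted instead of vanishing inside an unchecked Future.

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the pattern, not the SimCloudManager implementation.
class CountingSubmitter {
  private final ExecutorService pool = Executors.newCachedThreadPool();
  private final AtomicInteger backgroundFailures = new AtomicInteger();

  <T> Future<T> submit(Callable<T> task) {
    Callable<T> wrapped = () -> {
      try {
        return task.call();
      } catch (Throwable t) {
        // Count (and typically log) the failure; a test harness can fail the
        // suite if this counter is non-zero at the end of the run.
        backgroundFailures.incrementAndGet();
        throw t;
      }
    };
    return pool.submit(wrapped);
  }

  int failureCount() {
    return backgroundFailures.get();
  }
}
{code}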


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Windows (64bit/jdk-13-ea+8) - Build # 99 - Still unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/99/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:99)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:779)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:517)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.tr

[jira] [Resolved] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-03-15 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan resolved SOLR-13328.

Resolution: Not A Problem

> HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates 
> connection
> ---
>
> Key: SOLR-13328
> URL: https://issues.apache.org/jira/browse/SOLR-13328
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.0
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: 8.0.1, 8.1
>
>
> In SolrHttpClientBuilder, we can configure a lot of things, including a 
> HostnameVerifier.
> We have code like the following:
> HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());
> CommonNameVerifierClientConfigurer sets our own HostnameVerifier, which 
> checks the subject DN name.
> But this doesn't work: when we create the SSLConnectionSocketFactory in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), we never 
> check or use the HostnameVerifier configured in SolrHttpClientBuilder.
> The fix would be very simple: in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), if the 
> HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep 
> the same logic as before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793987#comment-16793987
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 0ac45c166850040091d043310efef9700179 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0ac45c1 ]

SOLR-12923: tweak the randomization in testCreateLargeSimCollections to reduce 
the max possible totalCores

also decrease the number of iters while increasing the cluster shape wait time to 
reduce the risk of spurious failures on machines under heavy contention w/o 
making the test any slower on average

(cherry picked from commit c79aeee5f9a013c280a76a8d6b04bea63f212909)


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793988#comment-16793988
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 87ec0c3600982215701829e0ebf687a3b76436b4 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=87ec0c3 ]

SOLR-12923: fix SimClusterStateProvider to use lock.lockInterruptibly() 
exclusively, and make SimCloudManager's Callable checks tolerant of Callables 
that may have failed due to interrupts w/o explicitly throwing 
InterruptedException

(cherry picked from commit 1a54c6b19db9dcb1081e43614bf479e0ac7bf177)
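
For readers unfamiliar with the pattern, here is a minimal sketch (not the 
actual SimClusterStateProvider code) of what acquiring a lock with 
lock.lockInterruptibly() and a try/finally looks like, so a thread blocked on 
the lock can be interrupted cleanly instead of hanging:

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class LockSketch {
  private final ReentrantLock lock = new ReentrantLock();

  void mutateSimState() throws InterruptedException {
    // lockInterruptibly() lets an interrupt cancel a thread that is blocked
    // waiting for the lock, instead of leaving it stuck forever.
    lock.lockInterruptibly();
    try {
      // ... mutate the simulated cluster state here ...
    } finally {
      lock.unlock(); // always release, even if the work above throws
    }
  }
}
{code}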


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793989#comment-16793989
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 695bfa3c908e36d771ca7275e465d0f26f4cae11 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=695bfa3 ]

SOLR-12923: Mea culpa: Remove useless import of java.lang... that breaks 
precommit

(cherry picked from commit 5c143022e7abcdf14a570786afec4ff099fd581c)


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793986#comment-16793986
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 653ba8d245a10c311eeb48321e89e1027cb3472d in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=653ba8d ]

SOLR-12923: Fix some issues w/concurrency and exception swallowing in 
SimClusterStateProvider/SimCloudManager

There are 3 tightly related bug fixes in these changes:

1) ConcurrentModificationExceptions were being thrown by some 
SimClusterStateProvider methods when
   creating collections/replicas due to the use of ArrayLists nodeReplicaMap. 
These ArrayLists were changed
   to use synchronizedList wrappers.
2) The Exceptions from #1 were being swallowed/hidden by code using 
SimCloudManager.submit() w/o checking
   the result of the resulting Future object. (As a result, tests waiting for a 
particular ClusterShape
   would timeout regardless of how long they waited.)   To protect against 
"silent" failures like this,
   this SimCloudManager.submit() has been updated to wrap all input Callables 
such that any uncaught errors
   will be logged and "counted."  SimSolrCloudTestCase will ensure a suite 
level failure if any such failures
   are counted.
3) The changes in #2 exposed additional concurrency problems with the Callables 
involved in leader election:
   These would frequently throw IllegalStateExceptions due to assumptions about 
the state/existence of
   replicas when the Callables were created vs when they were later run -- 
notably a Callable may have been
   created that held a reference to a Slice, but by the time that Callable was 
run the collection (or a
   node, etc...) referred to by that Slice may have been deleted.  While fixing 
this, the leader election
   logic was also cleaned up such that adding a replica only triggers leader 
election for that shard, not
   every shard in the collection.

While auditing this code, all usage of SimClusterStateProvider.lock was also 
cleaned up to remove risky points where an exception could have occurred after 
acquiring the lock but before the try/finally that ensures it is unlocked.

(cherry picked from commit 76babf876a49f82959cc36a1d7ef922a9c2dddff)
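
As an illustration of item #2 above (wrapping Callables in 
SimCloudManager.submit() so failures are logged and counted), here is a minimal 
sketch. This is not the actual SimCloudManager implementation; the executor and 
counter are stand-ins for the example:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class SubmitSketch {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private final AtomicLong backgroundTaskFailureCount = new AtomicLong();

  public <T> Future<T> submit(Callable<T> callable) {
    // Wrap the Callable so failures are recorded even if nobody ever calls
    // get() on the returned Future.
    return executor.submit(() -> {
      try {
        return callable.call();
      } catch (Exception e) {
        backgroundTaskFailureCount.incrementAndGet();
        e.printStackTrace(); // a real implementation would use a logger
        throw e;
      }
    });
  }

  public long getBackgroundTaskFailureCount() {
    // A test suite could assert this is 0 at the end of each test.
    return backgroundTaskFailureCount.get();
  }
}
{code}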


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-03-15 Thread jefferyyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793981#comment-16793981
 ] 

jefferyyuan edited comment on SOLR-13328 at 3/15/19 10:08 PM:
--

We are using the latest Solr 7, but it seems Solr 8 removes HostnameVerifier from 
SolrHttpClientBuilder, so this Jira doesn't apply any more.


was (Author: yuanyun.cn):
Seems Solr 8 removes HostnameVerifier from SolrHttpClientBuilder, so this Jira 
doesn't apply any more.

> HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates 
> connection
> ---
>
> Key: SOLR-13328
> URL: https://issues.apache.org/jira/browse/SOLR-13328
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.0
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: 8.0.1, 8.1
>
>
> In SolrHttpClientBuilder, we can configure a lot of things, including a 
> HostnameVerifier.
> We have code like the following:
> HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());
> CommonNameVerifierClientConfigurer sets our own HostnameVerifier, which 
> checks the subject DN name.
> But this doesn't work: when we create the SSLConnectionSocketFactory in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), we never 
> check or use the HostnameVerifier configured in SolrHttpClientBuilder.
> The fix would be very simple: in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), if the 
> HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep 
> the same logic as before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-03-15 Thread jefferyyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793981#comment-16793981
 ] 

jefferyyuan commented on SOLR-13328:


It seems Solr 8 removes HostnameVerifier from SolrHttpClientBuilder, so this Jira 
doesn't apply any more.

> HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates 
> connection
> ---
>
> Key: SOLR-13328
> URL: https://issues.apache.org/jira/browse/SOLR-13328
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.0
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: 8.0.1, 8.1
>
>
> In SolrHttpClientBuilder, we can configure a lot of things, including a 
> HostnameVerifier.
> We have code like the following:
> HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());
> CommonNameVerifierClientConfigurer sets our own HostnameVerifier, which 
> checks the subject DN name.
> But this doesn't work: when we create the SSLConnectionSocketFactory in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), we never 
> check or use the HostnameVerifier configured in SolrHttpClientBuilder.
> The fix would be very simple: in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), if the 
> HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep 
> the same logic as before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-03-15 Thread jefferyyuan (JIRA)
jefferyyuan created SOLR-13328:
--

 Summary: HostnameVerifier in HttpClientBuilder is ignored when 
HttpClientUtil creates connection
 Key: SOLR-13328
 URL: https://issues.apache.org/jira/browse/SOLR-13328
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 8.0
Reporter: jefferyyuan
 Fix For: 8.0.1, 8.1


In SolrHttpClientBuilder, we can configure a lot of things, including a 
HostnameVerifier.

We have code like the following:

HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());

CommonNameVerifierClientConfigurer sets our own HostnameVerifier, which checks 
the subject DN name.

But this doesn't work: when we create the SSLConnectionSocketFactory in 
HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), we never check 
or use the HostnameVerifier configured in SolrHttpClientBuilder.

The fix would be very simple: in 
HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), if the 
HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep the 
same logic as before.
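
For illustration only, a minimal sketch of the proposed check. The 
getHostnameVerifier() accessor and the way the configured builder is obtained 
are assumptions made for the example, not the actual Solr API:

{code:java}
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLContext;

import org.apache.http.conn.ssl.DefaultHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;

public class SchemaRegistrySketch {

  // Hypothetical accessor; stands in for however the configured
  // SolrHttpClientBuilder and its HostnameVerifier would be looked up.
  static HostnameVerifier configuredVerifier() {
    return null; // e.g. builder.getHostnameVerifier()
  }

  static SSLConnectionSocketFactory buildSocketFactory(SSLContext sslContext) {
    HostnameVerifier verifier = configuredVerifier();
    if (verifier == null) {
      // same logic as before: fall back to a default verifier
      verifier = new DefaultHostnameVerifier();
    }
    return new SSLConnectionSocketFactory(sslContext, verifier);
  }
}
{code}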



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-03-15 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793967#comment-16793967
 ] 

Hoss Man commented on SOLR-13131:
-

[~gus_heck] - I really appreciate you adding aliases.adoc as part of this 
issue, but it currently triggers several asciidoc warnings due to the use of 
{{1.}} multiple times in an ordered list (this is supported to make it easier 
to migrate docs from other markup syntaxes, but not recommended - hence the 
warnings: https://asciidoctor.org/docs/user-manual/#ordered-lists ) ...
{noformat}
 [exec] asciidoctor: WARNING: aliases.adoc: line 26: list item index: 
expected 2, got 1
 [exec] asciidoctor: WARNING: aliases.adoc: line 27: list item index: 
expected 3, got 1
 [exec] asciidoctor: WARNING: aliases.adoc: line 225: list item index: 
expected 2, got 1
 [exec] asciidoctor: WARNING: aliases.adoc: line 226: list item index: 
expected 3, got 1
{noformat}

...could you please update these to use the recommended {{. }} syntax instead?

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: indexingWithCRA.png, indexingwithoutCRA.png, 
> indexintWithoutCRA2.png
>
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device-type segregation, but I 
> could also see this leading to the ability to direct updates to 
> tenant-specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time-routed data 
> that is also segregated by device.
> Further design details to be added in comments.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793956#comment-16793956
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 5c143022e7abcdf14a570786afec4ff099fd581c in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5c14302 ]

SOLR-12923: Mea culpa: Remove useless import of java.lang... that breaks 
precommit


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2019-03-15 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793931#comment-16793931
 ] 

Gus Heck commented on SOLR-7642:


Yeah, I hit this recently too. It's a minor annoyance for an experienced user, but 
a possible hurdle for newer users following posts on SO, etc... I tend to think 
such a feature should hinge on a user-friendly argument at startup, such as 
--new, rather than (or in addition to) a sysprop.
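
For reference, a minimal sketch of creating the chroot znode up front with the 
plain ZooKeeper client (the same thing the zkcli.sh work-around mentioned in the 
issue accomplishes); the connect string and chroot path are just example values:

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CreateChrootSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the ensemble WITHOUT the chroot suffix on the connect string.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});
    try {
      // Create the chroot znode (e.g. "/lan") if it does not exist yet.
      if (zk.exists("/lan", false) == null) {
        zk.create("/lan", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT);
      }
    } finally {
      zk.close();
    }
  }
}
{code}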

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch, SOLR-7642_tag_7.5.0_proposition.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (the bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> would that somehow violate the intent of using a chroot?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8477) Improve handling of inner disjunctions in intervals

2019-03-15 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793928#comment-16793928
 ] 

Alan Woodward commented on LUCENE-8477:
---

Here's a better patch, using term counting rather than prefix matching - the 
latter won't work if we have stacked tokens, for example, and this makes things 
much simpler.

> Improve handling of inner disjunctions in intervals
> ---
>
> Key: LUCENE-8477
> URL: https://issues.apache.org/jira/browse/LUCENE-8477
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8477.patch, LUCENE-8477.patch
>
>
> The current implementation of the disjunction interval produced by 
> {{Intervals.or}} is a direct implementation of the OR operator from the Vigna 
> paper.  This produces minimal intervals, meaning that (a) is preferred over 
> (a b), and (b) also over (a b).  This has advantages when it comes to 
> counting intervals for scoring, but also has drawbacks when it comes to 
> matching.  For example, a phrase query for ((a OR (a b)) BLOCK (c)) will not 
> match the document (a b c), because (a) will be preferred over (a b), and (a 
> c) does not match.
> This ticket is to discuss the best way of dealing with disjunctions.
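
To make the example in the quoted description concrete, here is a rough sketch 
of that interval expression using the Intervals factory methods. The package 
name is assumed to be org.apache.lucene.search.intervals (as in the 8.0-era 
code); treat this as illustrative rather than exact:

{code:java}
import org.apache.lucene.search.Query;
import org.apache.lucene.search.intervals.IntervalQuery;
import org.apache.lucene.search.intervals.Intervals;
import org.apache.lucene.search.intervals.IntervalsSource;

public class IntervalsSketch {
  // ((a OR (a b)) BLOCK (c)): with minimal-interval semantics the disjunction
  // prefers (a) over (a b), so the phrase fails to match a document
  // containing "a b c".
  static Query blockQuery(String field) {
    IntervalsSource disjunction =
        Intervals.or(Intervals.term("a"), Intervals.phrase("a", "b"));
    IntervalsSource block = Intervals.phrase(disjunction, Intervals.term("c"));
    return new IntervalQuery(field, block);
  }
}
{code}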



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8477) Improve handling of inner disjunctions in intervals

2019-03-15 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8477:
--
Attachment: LUCENE-8477.patch

> Improve handling of inner disjunctions in intervals
> ---
>
> Key: LUCENE-8477
> URL: https://issues.apache.org/jira/browse/LUCENE-8477
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8477.patch, LUCENE-8477.patch
>
>
> The current implementation of the disjunction interval produced by 
> {{Intervals.or}} is a direct implementation of the OR operator from the Vigna 
> paper.  This produces minimal intervals, meaning that (a) is preferred over 
> (a b), and (b) also over (a b).  This has advantages when it comes to 
> counting intervals for scoring, but also has drawbacks when it comes to 
> matching.  For example, a phrase query for ((a OR (a b)) BLOCK (c)) will not 
> match the document (a b c), because (a) will be preferred over (a b), and (a 
> c) does not match.
> This ticket is to discuss the best way of dealing with disjunctions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7784 - Still Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7784/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:517)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:422) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1191)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:99)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:779)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr

[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793877#comment-16793877
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 1a54c6b19db9dcb1081e43614bf479e0ac7bf177 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1a54c6b ]

SOLR-12923: fix SimClusterStateProvider to use lock.lockInterruptibly() 
exclusively, and make SimCloudManager's Callable checks tolerant of Callables 
that may have failed due to interrupts w/o explicitly throwing 
InterruptedException


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13327) Randomize tests to use both sync and async logging

2019-03-15 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13327:
-

 Summary: Randomize tests to use both sync and async logging
 Key: SOLR-13327
 URL: https://issues.apache.org/jira/browse/SOLR-13327
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Erick Erickson
Assignee: Erick Erickson


Going to async logging is surfacing some issues, with the tests at least, as well 
as with the Prometheus exporter. I'm fixing things up as I can, but assuming we 
get all the tests in shape and issues like SOLR-13290 fixed, I want to be 
confident that the fixes for async logging don't adversely affect synchronous 
logging.

I don't have a clue yet _how_ to have the test framework use one or the other; 
so far the difference is entirely in the configuration files. I suppose one 
option would be to have separate sync and async config files and use one or 
the other, but I'd like to find a more elegant way. Any ideas welcome.

Assigning this to myself to keep track of it, but I wouldn't be hurt if someone 
wanted to grab it ;)
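
One possible shape for the randomization, sketched under the assumption that 
separate sync and async log4j2 config files exist and that the config can be 
selected via the standard log4j.configurationFile system property before any 
logger is initialized (the file names here are hypothetical):

{code:java}
import java.util.Random;

public class LoggingConfigSketch {
  // Pick a sync or async log4j2 config per test run. In a real test this
  // would use the randomized-testing seed (e.g. LuceneTestCase.random())
  // and would have to run before any logging is initialized.
  static void chooseLoggingConfig(Random random) {
    String config = random.nextBoolean()
        ? "log4j2-sync.xml"     // hypothetical file names
        : "log4j2-async.xml";
    System.setProperty("log4j.configurationFile", config);
  }
}
{code}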



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2019-03-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13268:
--
Priority: Blocker  (was: Major)

I'm moving this to blocker for the 8.1 release; this does _not_ affect 8.0.

Either this gets fixed before the release (the behavior is very wonky, so 
it'll take a while I'm afraid) or we change the log4j2.xml files to go back to 
synchronous logging before the release.

I do _not_ think that we need to roll back the rest of SOLR-12055 in that 
case, but that's certainly open for discussion.

> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Blocker
> Attachments: SOLR-13268.patch, SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, I see a couple of failures on JDK13 only. I'll collect a "starter set" 
> here; these are likely systemic, and once the root cause is found/fixed, the 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13290) Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError

2019-03-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13290:
--
Priority: Major  (was: Critical)

You should be able to switch the comments around in your log4j.xml files to go 
back to using synchronous logging as a work-around while the async logging 
settles out.

Assigning this to myself to keep track of it, but anyone with more knowledge of 
the Prometheus exporter should feel free to take it. I suspect all that's 
necessary is to add "/core/lib/disruptor-3.4.2.jar" to the classpath in "the 
right place".

> Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError
> --
>
> Key: SOLR-13290
> URL: https://issues.apache.org/jira/browse/SOLR-13290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 8.0, 8.1
>Reporter: Karl Stoney
>Assignee: Erick Erickson
>Priority: Major
>
> Since this 
> commit:[https://github.com/apache/lucene-solr/commit/02eb9d34404b8fc7225ee7c5c867e194afae17a0]
> The metrics exporter in branch_8x no longer starts
> {code:java}
> 2019-03-04 16:06:01,070 main ERROR Unable to invoke factory method in class 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig for element 
> AsyncLogger: java.lang.NoClassDefFoundError
> : com/lmax/disruptor/EventFactory java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:136)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:964)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:904)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:896)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:514)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:238)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:250)
>  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:548)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:620)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
>  at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>  at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>  at 
> org.apache.solr.prometheus.exporter.SolrExporter.<init>(SolrExporter.java:48)
> Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.getAsyncLoggerConfigDelegate(AbstractConfiguration.java:203)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.<init>(AsyncLoggerConfig.java:91)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.createLogger(AsyncLoggerConfig.java:273)
>  ... 25 more
> Caused by: java.lang.ClassNotFoundException: com.lmax.disruptor.EventFactory
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 28 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13290) Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError

2019-03-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-13290:
-

Assignee: Erick Erickson

> Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError
> --
>
> Key: SOLR-13290
> URL: https://issues.apache.org/jira/browse/SOLR-13290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 8.0, 8.1
>Reporter: Karl Stoney
>Assignee: Erick Erickson
>Priority: Critical
>
> Since this 
> commit:[https://github.com/apache/lucene-solr/commit/02eb9d34404b8fc7225ee7c5c867e194afae17a0]
> The metrics exporter in branch_8x no longer starts
> {code:java}
> 2019-03-04 16:06:01,070 main ERROR Unable to invoke factory method in class 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig for element 
> AsyncLogger: java.lang.NoClassDefFoundError
> : com/lmax/disruptor/EventFactory java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:136)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:964)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:904)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:896)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:514)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:238)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:250)
>  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:548)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:620)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
>  at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>  at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>  at 
> org.apache.solr.prometheus.exporter.SolrExporter.<init>(SolrExporter.java:48)
> Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.getAsyncLoggerConfigDelegate(AbstractConfiguration.java:203)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.<init>(AsyncLoggerConfig.java:91)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.createLogger(AsyncLoggerConfig.java:273)
>  ... 25 more
> Caused by: java.lang.ClassNotFoundException: com.lmax.disruptor.EventFactory
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 28 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13252) NPE trying to set autoscaling policy for existing collection

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13252:

Affects Version/s: (was: 8.x)

> NPE trying to set autoscaling policy for existing collection
> 
>
> Key: SOLR-13252
> URL: https://issues.apache.org/jira/browse/SOLR-13252
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-13252.patch
>
>
> Steps to reproduce:
> * create a collection without collection-specific policy, eg. {{test}}
> * define a collection-specific policy {{policy1}}:
> {code}
> POST http://localhost:8983/solr/admin/autoscaling
> {
> "set-policy": 
>   {
>   "policy1" :[
>   {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
>   ]
>   }
> }
> {code}
> * try to modify the collection to use this policy
> {code}
> http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=test&policy=policy1
> {code}
> A NullPointerException is thrown due to the previous value of the "policy" 
> property being absent:
> {code}
> 2019-02-14 18:48:17.007 ERROR 
> (OverseerThreadFactory-9-thread-5-processing-n:192.168.0.69:8983_solr) 
> [c:test   ] o.a.s.c.a.c.OverseerCollectionMessageHandler Collection: test 
> operation: modifycollection failed:java.lang.NullPointerException
> at 
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.modifyCollection(OverseerCollectionMessageHandler.java:687)
> at 
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:292)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:496)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
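
A generic illustration (not the actual OverseerCollectionMessageHandler patch) 
of the kind of null-safe comparison that avoids this class of NPE when a 
collection property such as "policy" may never have been set:

{code:java}
import java.util.Map;
import java.util.Objects;

public class NullSafeCompareSketch {
  // Calling oldValue.equals(newValue) throws NPE when the property was never
  // set; Objects.equals() handles the null case on either side.
  static boolean propertyChanged(Map<String, Object> collectionProps,
                                 String key, Object newValue) {
    Object oldValue = collectionProps.get(key); // may be null, e.g. "policy"
    return !Objects.equals(oldValue, newValue);
  }
}
{code}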



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-8.0 - Build # 32 - Unstable

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.0/32/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp

Error Message:
{numFound=87500,start=0,docs=[]} expected:<10> but was:<87500>

Stack Trace:
java.lang.AssertionError: {numFound=87500,start=0,docs=[]} 
expected:<10> but was:<87500>
at 
__randomizedtesting.SeedInfo.seed([8BD16C2213FE5C18:AA8F2A801FD082B9]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13908 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing
   [junit4]   2> Creating dataDir: 
/home/jenkins/je

[jira] [Updated] (SOLR-13290) Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13290:

Affects Version/s: (was: 8.x)
   8.1
   8.0

> Prometheus metric exporter AsyncLogger: java.lang.NoClassDefFoundError
> --
>
> Key: SOLR-13290
> URL: https://issues.apache.org/jira/browse/SOLR-13290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 8.0, 8.1
>Reporter: Karl Stoney
>Priority: Critical
>
> Since this 
> commit:[https://github.com/apache/lucene-solr/commit/02eb9d34404b8fc7225ee7c5c867e194afae17a0]
> The metrics exporter in branch_8x no longer starts
> {code:java}
> 2019-03-04 16:06:01,070 main ERROR Unable to invoke factory method in class 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig for element 
> AsyncLogger: java.lang.NoClassDefFoundError
> : com/lmax/disruptor/EventFactory java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:136)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:964)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:904)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:896)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:514)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:238)
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:250)
>  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:548)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:620)
>  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
>  at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>  at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
>  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
>  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
>  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>  at 
> org.apache.solr.prometheus.exporter.SolrExporter.(SolrExporter.java:48)
> Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
>  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.getAsyncLoggerConfigDelegate(AbstractConfiguration.java:203)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.(AsyncLoggerConfig.java:91)
>  at 
> org.apache.logging.log4j.core.async.AsyncLoggerConfig.createLogger(AsyncLoggerConfig.java:273)
>  ... 25 more
> Caused by: java.lang.ClassNotFoundException: com.lmax.disruptor.EventFactory
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 28 more{code}
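For context on the error above: log4j2's AsyncLogger support loads the LMAX Disruptor at runtime, so the NoClassDefFoundError usually just means the disruptor jar is missing from the exporter's classpath. A minimal sketch of the missing dependency; the exact version is an assumption, not something stated in this report:
{noformat}
com.lmax:disruptor   (groupId:artifactId; pick the version the bundled log4j2 expects, e.g. 3.4.x)
{noformat}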



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13263) Facet Heat Map should support GeoJSON

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13263:

Affects Version/s: (was: 8.x)
   8.1

> Facet Heat Map should support GeoJSON
> -
>
> Key: SOLR-13263
> URL: https://issues.apache.org/jira/browse/SOLR-13263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Affects Versions: 8.0, 8.1, master (9.0)
>Reporter: Bar Rotstein
>Priority: Major
>  Labels: Facets, Geolocation, facet, faceting, geo
> Attachments: SOLR-13263-nocommit.patch
>
>
> Currently the Facet Heatmap (geographical facets) does not support any input 
> formats other than WKT or '[ ]'. This seems to be because 
> FacetHeatmap.Parser#parse uses SpatialUtils#parseGeomSolrException, which in 
> turn uses a deprecated method (SpatialContext#readShapeFromWkt) to parse 
> the string input.
> The newer way of parsing a String into a Shape object should be used instead; it makes 
> the code a lot cleaner and should support more formats (including GeoJSON).
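A rough sketch of the newer parsing route being suggested, assuming spatial4j's SupportedFormats API (the getFormats()/read() names are assumptions based on that library, not code taken from a patch):
{code:java}
// Sketch only: parse a spatial string that may be WKT or GeoJSON instead of
// going through the deprecated readShapeFromWkt path.
import org.locationtech.spatial4j.context.SpatialContext;
import org.locationtech.spatial4j.shape.Shape;

public class HeatmapGeomParseSketch {
  public static Shape parse(String geomStr, SpatialContext ctx) throws Exception {
    // The SupportedFormats helper picks a ShapeReader from the input, e.g. '{' -> GeoJSON.
    return ctx.getFormats().read(geomStr);
  }
}
{code}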



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-03-15 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793830#comment-16793830
 ] 

Kevin Risden commented on SOLR-11763:
-

Side note: I found the comment about the "8.x" fix version while cleaning up 8.x -> 
8.1.

 

https://issues.apache.org/jira/browse/SOLR-12999?focusedCommentId=16758679&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16758679

 

Either way, there is only an 8.1 version now (we could always rename it back to 
8.x). It was bugging me that there were both 8.1 and 8.x.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13178) ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs that are not objects

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13178:

Fix Version/s: (was: 8.x)
   (was: master (9.0))

> ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs 
> that are not objects
> ---
>
> Key: SOLR-13178
> URL: https://issues.apache.org/jira/browse/SOLR-13178
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.5, master (9.0)
> Environment: Running on Unix, using a git checkout close to master.
> h2. Steps to reproduce
>  * Build commit ea2c8ba of Solr as described in the section below.
>  * Build the films collection as described below.
>  * Start the server using the command {{“./bin/solr start -f -p 8983 -s 
> /tmp/home”}}
>  * Request the URL above.
> h2. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h2. Building the collection
> We followed Exercise 2 from the quick start tutorial 
> ([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - 
> for reference, I have attached a copy of the database.
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting any of the following URLs gives a 500 error due to a 
> ClassCastException in o.a.s.r.j.ObjectUtil.mergeObjects:
>  * [http://localhost:8983/solr/films/select?json=0]
>  * [http://localhost:8983/solr/films/select?json.facet=1&json.facet.field=x]
> The error response is caused by uncaught ClassCastExceptions, such as (for 
> the first URL):
> {{java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.util.Map}}
>  {{at 
> org.apache.solr.request.json.ObjectUtil.mergeObjects(ObjectUtil.java:108)}}
>  {{at 
> org.apache.solr.request.json.RequestUtil.mergeJSON(RequestUtil.java:269)}}
>  {{at 
> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:180)}}
>  {{at 
> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)}}
>  {{at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)}}
>  {{[...]}}
> {{The culprit seems to be the o.a.s.r.j.RequestUtil.mergeJSON method, in 
> particular the following fragment:}}
>  {{    Object o = ObjectBuilder.fromJSON(jsonStr);}}
>  {{    // zero-length strings or comments can cause this to be null (and 
> a zero-length string can result from a json content-type w/o a body)}}
>  {{    if (o != null) {}}
>  {{  ObjectUtil.mergeObjects(json, path, o, handler);}}
>      }
> Note that o is an Object representing a JSON _value_, while Solr seems to 
> expect that o holds a JSON _object_. But in the examples above, the JSON 
> value is a number (represented by a Long object) instead - this is, in fact, 
> valid JSON.
> A possible fix could be to use the getObject method of ObjectUtil instead of 
> blindly calling fromJSON.
> This bug was found using [Diffblue Microservices 
> Testing|http://www.diffblue.com/labs]. Find more information on this [test 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].
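One defensive shape such a fix could take, sketched here with a plain type check (an alternative to the getObject suggestion above; it assumes the surrounding RequestUtil imports for Map, SolrException, ObjectBuilder and ObjectUtil):
{code:java}
// Sketch: reject JSON that parses to something other than an object with a 400,
// instead of letting mergeObjects cast a Long/String/etc. to Map and fail.
Object o = ObjectBuilder.fromJSON(jsonStr);
if (o != null) {
  if (!(o instanceof Map)) {
    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        "Parameter 'json' must be a JSON object, but was: " + o.getClass().getSimpleName());
  }
  ObjectUtil.mergeObjects(json, path, o, handler);
}
{code}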



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13230) Move JaSpell code into Solr and deprecate the factory

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13230:

Fix Version/s: (was: 8.x)
   8.1

> Move JaSpell code into Solr and deprecate the factory
> -
>
> Key: SOLR-13230
> URL: https://issues.apache.org/jira/browse/SOLR-13230
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13230.patch
>
>
> The JaSpell suggester is deprecated, and will be removed from the lucene 
> codebase.  However, it's currently the default implementation for suggesters 
> in Solr, and its Solr factory is *not* deprecated, so users won't have been 
> receiving any warnings.
> I suggest that we deprecate the factory, and move the relevant bits of code 
> from lucene into Solr, as a first step.  In a follow-up we should change the 
> default implementation (possibly by just removing the default, and forcing 
> people to choose a factory?) and remove the deprecated code from Solr as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13293:

Affects Version/s: (was: 8.x)
   8.0

> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.0
>Reporter: Karl Stoney
>Priority: Minor
>
> Hi, 
> Testing out branch_8x, we're randomly seeing the following errors on a simple 
> 3 node cluster.  It doesn't appear to affect replication (the cluster remains 
> green).
> They come in bulk (literally 1000s at a time).
> There were no network issues at the time.
> {code:java}
> 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
> r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
> s:shard1] ERROR 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> java.nio.channels.AsynchronousCloseException: null
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root
> - 2019-03-04 16:30:04]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
> 16:30:04]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13292:

Affects Version/s: (was: 8.x)
   8.0

> Provide extended per-segment status of a collection
> ---
>
> Key: SOLR-13292
> URL: https://issues.apache.org/jira/browse/SOLR-13292
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13292.patch, SOLR-13292.patch, adminSegments.json, 
> adminSegments.json, colstatus.json, colstatus.json
>
>
> When changing a collection configuration or schema there may be non-obvious 
> conflicts between existing data and the new configuration or the newly 
> declared schema. A similar situation arises when upgrading Solr to a new 
> version while keeping the existing data.
> Currently the {{SegmentsInfoRequestHandler}} provides insufficient 
> information to detect such conflicts. Also, there's no collection-wide 
> command to gather such status from all shard leaders.
> This issue proposes extending the {{/admin/segments}} handler to provide more 
> low-level Lucene details about the segments, including potential conflicts 
> between existing segments' data and the current declared schema. It also adds 
> a new COLSTATUS collection command to report an aggregated status from all 
> shards, and optionally for all collections.
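For readers following along, the new command goes through the Collections API; a hedged example of what a call might look like (parameter names assumed to follow the usual Collections API conventions):
{noformat}
http://localhost:8983/solr/admin/collections?action=COLSTATUS&collection=films
{noformat}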



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13215) Upgrade dropwizard metrics to 4.0.5

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13215:

Affects Version/s: (was: 8.x)
   8.0

> Upgrade dropwizard metrics to 4.0.5
> ---
>
> Key: SOLR-13215
> URL: https://issues.apache.org/jira/browse/SOLR-13215
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.7, 7.6.1, 8.0
>Reporter: Henrik
>Priority: Major
>
> This removes the ganglia reporter which is now missing from the metrics 
> library.
> See [https://github.com/dropwizard/metrics/issues/1319]
>  
> Pull request in: [https://github.com/apache/lucene-solr/pull/561]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk sp

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13060:

Fix Version/s: (was: 8.x)
   8.1

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13060.patch, 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9840) Add a unit test for LDAP integration

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9840:
---
Fix Version/s: (was: 8.x)
   8.1

> Add a unit test for LDAP integration
> 
>
> Key: SOLR-9840
> URL: https://issues.apache.org/jira/browse/SOLR-9840
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9840.patch, SOLR-9840.patch, SOLR-9840.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> SOLR-9513 introduced a generic Hadoop authentication plugin which can be used 
> to configure LDAP authentication functionality in Hadoop. This jira is to 
> track the work required for adding a unit test for LDAP integration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12999) Index replication could delete segments first

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12999:

Fix Version/s: (was: 8.x)
   master (9.0)
   8.1

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce disk capacity requirements of Solr, and it 
> would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since it would prevent the 
> removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.
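A minimal sketch of the 'delete first' step being proposed, assuming the fetcher already has the remote file list and the local file list in hand (helper names here are illustrative, not the actual IndexFetcher API):
{code:java}
// Sketch: delete local index files that the incoming remote index does not use,
// before downloading anything, so disk space is freed up front.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

class DeleteFirstSketch {
  static void deleteFilesNotInRemote(Set<String> remoteFiles, Set<String> localFiles,
                                     Path localIndexDir) throws IOException {
    Set<String> toDelete = new HashSet<>(localFiles);
    toDelete.removeAll(remoteFiles);                 // local files the remote index does not contain
    for (String name : toDelete) {
      Files.deleteIfExists(localIndexDir.resolve(name));
    }
  }
}
{code}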



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13201) NullPointerException in ConcurrentHashMap caused by passing null to get mmethod in org/apache/solr/schema/IndexSchema.java[1201]

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13201:

Fix Version/s: (was: 8.x)
   8.1

> NullPointerException in ConcurrentHashMap caused by passing null to get 
> mmethod in org/apache/solr/schema/IndexSchema.java[1201]
> 
>
> Key: SOLR-13201
> URL: https://issues.apache.org/jira/browse/SOLR-13201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Fix For: 8.1, master (9.0)
>
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q=initial_release_date:[*%20TO%20NOW-18YEAR]&wt=php&json.facet.facet.field=2
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-19) [   x:films] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.schema.IndexSchema.getFieldOrNull(IndexSchema.java:1201)
>   at org.apache.solr.schema.IndexSchema.getField(IndexSchema.java:1225)
>   at 
> org.apache.solr.search.facet.FacetField.createFacetProcessor(FacetField.java:118)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:397)
>   at 
> org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
>   at 
> org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
>   at 
> org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:401)
>   at 
> org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> The 'get' method is called on the member 
> 'org.apache.solr.schema.IndexSchema.dynamicFieldCache' (which is a 
> 'ConcurrentHashMap') with null as an argument; that leads to a crash inside 
> the 'get' method. The null value (passed to 'get') comes from the member 
> 'field' of the 'org.apache.solr.search.facet.FacetField' instance at 
> org/apache/solr/search/facet/FacetField.java[118].
> We found this bug using [Diffblue Micr
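One possible guard, sketched under the assumption that a 400 response is the right outcome ('field' and 'schema' stand in for however FacetField and the IndexSchema are actually wired together):
{code:java}
// Sketch: fail fast with a clear error when the facet has no field name, rather
// than letting IndexSchema.getFieldOrNull(null) reach ConcurrentHashMap.get(null).
if (field == null) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "Missing 'field' in facet request: " + this);
}
SchemaField sf = schema.getField(field);
{code}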

[jira] [Updated] (SOLR-12322) Select specific field list for child documents using ChildDocTransformerFactory

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12322:

Fix Version/s: (was: 8.x)
   (was: master (9.0))

> Select specific field list for child documents using 
> ChildDocTransformerFactory
> ---
>
> Key: SOLR-12322
> URL: https://issues.apache.org/jira/browse/SOLR-12322
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.6
>Reporter: adeppa
>Priority: Minor
> Attachments: SOLR-12322.patch, doc_level_exaplantion.docx
>
>
> With the current version of Solr and nested indexing, when you fetch a 
> parent record it returns all the fields of its children. This increases 
> the size of the data being returned from Solr and also hurts our performance 
> at times.
> This ticket will be used to update the ChildDocTransformerFactory class with 
> additional parameters to specify the list of fields to be pulled for child 
> documents.
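To make the ask concrete, a hypothetical request showing the kind of per-child field list being proposed (the nested fl inside the [child] transformer is the proposal itself, not an existing parameter; collection and field names are made up):
{noformat}
http://localhost:8983/solr/mycollection/select?q=id:parent1&fl=id,name,[child childFilter=type_s:review fl=review_id,score]
{noformat}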



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13229) ZkController.giveupLeadership should cleanup the replicasMetTragicEvent map after all exceptions

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13229:

Fix Version/s: (was: 8.x)
   master (9.0)
   8.1

> ZkController.giveupLeadership should cleanup the replicasMetTragicEvent map 
> after all exceptions
> 
>
> Key: SOLR-13229
> URL: https://issues.apache.org/jira/browse/SOLR-13229
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently {{ZkController.giveupLeadership}} cleans up the 
> {{replicasMetTragicEvent}} map only after {{KeeperException}} and 
> {{InterruptedException}}; all other exceptions should also trigger the cleanup.
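A minimal sketch of the suggested change, with the cleanup assumed to be a simple map removal keyed by core name (identifiers are illustrative, not the actual field names):
{code:java}
// Sketch: move the map cleanup into a finally block so every failure path clears it.
try {
  // ... attempt to give up leadership / rejoin the leader election ...
} catch (Exception e) {
  log.warn("Could not give up leadership cleanly for {}", coreName, e);
} finally {
  replicasMetTragicEvent.remove(coreName); // previously only reached for Keeper/Interrupted exceptions
}
{code}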



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13213) Search Components cannot modify "shards" parameter

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13213:

Fix Version/s: (was: 8.x)

> Search Components cannot modify "shards" parameter
> --
>
> Key: SOLR-13213
> URL: https://issues.apache.org/jira/browse/SOLR-13213
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When creating a custom search component for a customer, I realised that 
> modifying the "shards" parameter in {{prepare()}} is not possible, since in 
> {{SearchHandler}} the {{ShardHandler}} is initialised based on the "shards" 
> parameter just *before* search components are consulted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13262) Implement collection RENAME command

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13262:

Fix Version/s: (was: 8.x)
   8.1

> Implement collection RENAME command
> ---
>
> Key: SOLR-13262
> URL: https://issues.apache.org/jira/browse/SOLR-13262
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0, master (9.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> There's no RENAME collection command, which makes it unnecessarily difficult 
> to manage long-term collection life-cycles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12708) Async collection actions should not hide failures

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12708:

Fix Version/s: (was: 8.x)
   8.1

> Async collection actions should not hide failures
> -
>
> Key: SOLR-12708
> URL: https://issues.apache.org/jira/browse/SOLR-12708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Backup/Restore
>Affects Versions: 7.4
>Reporter: Mano Kovacs
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The async collection API may hide failures compared to the sync version. 
> [OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744]
>  structures errors differently in the response, which hides failures from most 
> evaluators. RestoreCmd did not receive, nor handle, async addReplica failures.
> Sample create collection sync and async result with invalid solrconfig.xml:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":32104},
> "failure":{
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup."}
> }
> {noformat}
> vs async:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":3},
> "success":{
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":3}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":11}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}}},
> "myTaskId2709146382836":{
> "responseHeader":{
> "status":0,
> "QTime":1},
> "STATUS":"failed",
> "Response":"Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to 
> create core [name_shard2_replica_n2] Caused by: The content of elements must 
> consist of well-formed character data or markup."},
> "status":{
> "state":"completed",
> "msg":"found [myTaskId] in completed tasks"}}
> {noformat}
> Proposing adding a failure node to the results, keeping the response backward 
> compatible but correct.
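For illustration, the async status response could carry the same kind of top-level failure node the sync response already exposes; a hedged sketch of what that might look like (structure assumed, not taken from a patch):
{noformat}
{
  "responseHeader":{"status":0,"QTime":1},
  "status":{"state":"failed","msg":"found [myTaskId] in failed tasks"},
  "failure":{
    "localhost:8983_solr":"Error CREATEing SolrCore 'name_shard2_replica_n2': ..."}}
{noformat}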



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13271) Implement a read-only mode for a collection

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13271:

Affects Version/s: (was: 8.x)
   8.0

> Implement a read-only mode for a collection
> ---
>
> Key: SOLR-13271
> URL: https://issues.apache.org/jira/browse/SOLR-13271
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13271.patch, SOLR-13271.patch
>
>
> Spin-off from SOLR-11127. In some scenarios it's useful to be able to block 
> any index updates to a collection, while still being able to search its 
> contents.
> Currently the scope of this issue is SolrCloud, ie. standalone Solr will not 
> be supported.
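A hedged example of how such a switch could be toggled, assuming it ends up as a collection property settable through MODIFYCOLLECTION (the readOnly property name is an assumption here):
{noformat}
http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=mycollection&readOnly=true
{noformat}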



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13271) Implement a read-only mode for a collection

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13271:

Fix Version/s: (was: 8.x)
   8.1

> Implement a read-only mode for a collection
> ---
>
> Key: SOLR-13271
> URL: https://issues.apache.org/jira/browse/SOLR-13271
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.x, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13271.patch, SOLR-13271.patch
>
>
> Spin-off from SOLR-11127. In some scenarios it's useful to be able to block 
> any index updates to a collection, while still being able to search its 
> contents.
> Currently the scope of this issue is SolrCloud, ie. standalone Solr will not 
> be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11876:

Fix Version/s: (was: 8.x)
   8.1

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 7.7.1, 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Setup a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcess

[jira] [Updated] (SOLR-9762) Remove the workaround implemented for HADOOP-13346

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9762:
---
Fix Version/s: (was: 8.x)
   8.1

> Remove the workaround implemented for HADOOP-13346
> --
>
> Key: SOLR-9762
> URL: https://issues.apache.org/jira/browse/SOLR-9762
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0
>Reporter: Hrishikesh Gadre
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9762.patch, SOLR-9762.patch
>
>
> Here is the workaround that needs to be removed:
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/security/KerberosPlugin.java#L230]
>  * 
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/security/HadoopAuthPlugin.java#L247



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7321) Remove reflection in FSHDFSUtils.java

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-7321:
---
Fix Version/s: (was: 8.x)
   8.1

> Remove reflection in FSHDFSUtils.java
> -
>
> Key: SOLR-7321
> URL: https://issues.apache.org/jira/browse/SOLR-7321
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, SolrCloud
>Reporter: Mike Drob
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-7321.patch, SOLR-7321.patch
>
>
> When we copied FSHDFSUtil from HBase in SOLR-6969 we also carried over their 
> compatibility shims for both Hadoop 1 and Hadoop 2. Since we only support 
> Hadoop 2, we don't need to do reflection in this class and can just invoke 
> the methods directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9763) Remove the workaround implemented for HADOOP-12767

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9763:
---
Fix Version/s: (was: 8.x)
   8.1

> Remove the workaround implemented for HADOOP-12767
> --
>
> Key: SOLR-9763
> URL: https://issues.apache.org/jira/browse/SOLR-9763
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0
>Reporter: Hrishikesh Gadre
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9763.patch
>
>
> Here is the workaround that needs to be removed:
>  * 
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/security/HadoopAuthFilter.java#L83
>  * 
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/security/DelegationTokenKerberosFilter.java#L107



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8182) TestSolrCloudWithKerberosAlt fails consistently on JDK9

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8182:
---
Fix Version/s: (was: 8.x)
   8.1

> TestSolrCloudWithKerberosAlt fails consistently on JDK9
> ---
>
> Key: SOLR-8182
> URL: https://issues.apache.org/jira/browse/SOLR-8182
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, security, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: Java9
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-8182.patch
>
>
> The test fails consistently on JDK9 with the following initialization error:
> {code}
> FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
> Error Message:
> org.apache.directory.api.ldap.model.exception.LdapOtherException: 
> ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
> ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
> AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35  NAME 
> 'schemaModifyTimestamp'  DESC time which schema was modified  SUP 
> modifyTimestamp  EQUALITY generalizedTimeMatch  ORDERING 
> generalizedTimeOrderingMatch  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24  USAGE 
> directoryOperation  ) '
> Stack Trace:
> org.apache.directory.api.ldap.model.exception.LdapOtherException: 
> org.apache.directory.api.ldap.model.exception.LdapOtherException: 
> ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
> ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
> AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35
>  NAME 'schemaModifyTimestamp'
>  DESC time which schema was modified
>  SUP modifyTimestamp
>  EQUALITY generalizedTimeMatch
>  ORDERING generalizedTimeOrderingMatch
>  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
>  USAGE directoryOperation
>  )
> '
> at 
> __randomizedtesting.SeedInfo.seed([321A63D948BF59B7:FC2CDF5705107C7]:0)
> at 
> org.apache.directory.server.core.api.partition.AbstractPartition.initialize(AbstractPartition.java:84)
> at 
> org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1808)
> at 
> org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:1248)
> at 
> org.apache.hadoop.minikdc.MiniKdc.initDirectoryService(MiniKdc.java:383)
> at org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:319)
> at 
> org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.setupMiniKdc(TestSolrCloudWithKerberosAlt.java:105)
> at 
> org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.setUp(TestSolrCloudWithKerberosAlt.java:94)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Fix Version/s: (was: 8.x)
   8.1

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10199) Solr's Kerberos functionality does not work in Java9 due to dependency on hadoop's AuthenticationFilter which attempt access to JVM protected classes

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10199:

Fix Version/s: (was: 8.x)
   8.1

> Solr's Kerberos functionality does not work in Java9 due to dependency on 
> hadoop's AuthenticationFilter which attempt access to JVM protected classes
> -
>
> Key: SOLR-10199
> URL: https://issues.apache.org/jira/browse/SOLR-10199
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Hoss Man
>Assignee: Kevin Risden
>Priority: Major
>  Labels: Java9
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-10199.patch
>
>
> (discovered this while working on test improvements for SOLR-8052)
> Our Kerberos based authn/authz features are all built on top of Hadoop's 
> {{AuthenticationFilter}} which in turn uses Hadoop's {{KerberosUtil}} -- but 
> this does not work on Java9/jigsaw JVMs because that class in turn attempts 
> to access {{sun.security.jgss.GSSUtil}} which is not exported by {{module 
> java.security.jgss}}
> This means that Solr users who depend on Kerberos will not be able to upgrade 
> to Java9, even if they do not use any Hadoop specific features of Solr.
> 
> Example log messages...
> {noformat}
>[junit4]   2> 6833 WARN  (qtp442059499-30) [] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: 
> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]   2> 6841 WARN  
> (TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[95A583AF82D1EBBE]) [] 
> o.a.h.c.p.ResponseProcessCookies Invalid cookie header: "Set-Cookie: 
> hadoop.auth=; Path=/; Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; 
> HttpOnly". Invalid 'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
> {noformat}
> (NOTE: HADOOP-14115 is cause of malformed cookie expiration)
> ultimately the client gets a 403 error (as seen in a testcase with patch from 
> SOLR-8052 applied and java9 assume commented out)...
> {noformat}
>[junit4] ERROR   7.10s | TestSolrCloudWithKerberosAlt.testBasics <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:34687/solr: Expected mime type 
> application/octet-stream but got text/html. 
>[junit4]> 
>[junit4]>  content="text/html;charset=ISO-8859-1"/>
>[junit4]> Error 403 
>[junit4]> 
>[junit4]> 
>[junit4]> HTTP ERROR: 403
>[junit4]> Problem accessing /solr/admin/collections. Reason:
>[junit4]> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]> http://eclipse.org/jetty";>Powered by Jetty:// 
> 9.3.14.v20161028
>[junit4]> 
>[junit4]> 
> {noformat}
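As a stopgap for people stuck on this today (not the fix tracked here), the JVM can be told to export the package to unnamed modules; a sketch, assuming the flag is passed via SOLR_OPTS:
{noformat}
SOLR_OPTS="$SOLR_OPTS --add-exports java.security.jgss/sun.security.jgss=ALL-UNNAMED"
{noformat}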



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793810#comment-16793810
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit c79aeee5f9a013c280a76a8d6b04bea63f212909 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c79aeee ]

SOLR-12923: tweak the randomization in testCreateLargeSimCollections to reduce 
the max possible totalCores

also decrease the number of iters while increasing the cluster shape wait time to 
reduce the risk of spurious failures on machines under heavy contention w/o 
making the test any slower on average


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and so easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13074:

Fix Version/s: (was: 8.x)
   8.1

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13074.patch, SOLR-13074.patch
>
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps there is something 
> we should be closing cleanly but aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13307) Ensure HDFS tests clear System properties they set

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13307:

Fix Version/s: (was: 8.x)
   8.1

> Ensure HDFS tests clear System properties they set
> --
>
> Key: SOLR-13307
> URL: https://issues.apache.org/jira/browse/SOLR-13307
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13307.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> While looking at SOLR-13297, found there are system properties that are not 
> cleared in HDFS tests. This can cause other HDFS tests in the same JVM to 
> have weird configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-03-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793797#comment-16793797
 ] 

David Smiley commented on SOLR-11763:
-

Yeah, something like that.  Here's what I suggest:
 # Un-assign this version from those that are not Resolved or Closed.
 # Un-assign this version from those that are in 8.0.  Those issues should be 
marked Closed, BTW, since 8.0 was released.
 # Assign those remaining to 8.1.  These will all be Resolved issues and we 
know they will be in 8.1.
 # Delete the version.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13222) Improve logging in StreamingSolrClients

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13222:

Fix Version/s: (was: 8.x)
   8.1

> Improve logging in StreamingSolrClients
> ---
>
> Key: SOLR-13222
> URL: https://issues.apache.org/jira/browse/SOLR-13222
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Peter Cseh
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13222.patch
>
>
> The internal class ErrorReportingConcurrentUpdateSolrClient 
>  logs the exception's [stack 
> trace|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/update/StreamingSolrClients.java#L113]
>  with the log message "error".
> Adding information about the request associated with the error helped us in 
> investigating intermittent issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13252) NPE trying to set autoscaling policy for existing collection

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13252:

Fix Version/s: (was: 8.x)

> NPE trying to set autoscaling policy for existing collection
> 
>
> Key: SOLR-13252
> URL: https://issues.apache.org/jira/browse/SOLR-13252
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.7, 8.0, 8.x, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-13252.patch
>
>
> Steps to reproduce:
> * create a collection without collection-specific policy, eg. {{test}}
> * define a collection-specific policy {{policy1}}:
> {code}
> POST http://localhost:8983/solr/admin/autoscaling
> {
> "set-policy": 
>   {
>   "policy1" :[
>   {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
>   ]
>   }
> }
> {code}
> * try to modify the collection to use this policy
> {code}
> http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=test&policy=policy1
> {code}
> A NullPointerException is thrown due to the previous value of the "policy" 
> property being absent:
> {code}
> 2019-02-14 18:48:17.007 ERROR 
> (OverseerThreadFactory-9-thread-5-processing-n:192.168.0.69:8983_solr) 
> [c:test   ] o.a.s.c.a.c.OverseerCollectionMessageHandler Collection: test 
> operation: modifycollection failed:java.lang.NullPointerException
> at 
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.modifyCollection(OverseerCollectionMessageHandler.java:687)
> at 
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:292)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:496)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8052) Tests using MiniKDC do not work with Java 9 Jigsaw

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8052:
---
Fix Version/s: (was: 8.x)

> Tests using MiniKDC do not work with Java 9 Jigsaw
> --
>
> Key: SOLR-8052
> URL: https://issues.apache.org/jira/browse/SOLR-8052
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication, Hadoop Integration
>Affects Versions: 5.3
>Reporter: Uwe Schindler
>Assignee: Kevin Risden
>Priority: Major
>  Labels: Java9
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-8052.patch, SOLR-8052.patch
>
>
> As described in my status update yesterday, there are some problems in 
> dependencies shipped with Solr that don't work with Java 9 Jigsaw builds.
> org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider
> {noformat}
>[junit4]> Throwable #1: java.lang.RuntimeException: 
> java.lang.IllegalAccessException: Class org.apache.hadoop.minikdc.MiniKdc can 
> not access a member of class sun.security.krb5.Config (module 
> java.security.jgss) with modifiers "public static", module java.security.jgss 
> does not export sun.security.krb5 to 
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:211)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:81)
>[junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
>[junit4]> Caused by: java.lang.IllegalAccessException: Class 
> org.apache.hadoop.minikdc.MiniKdc can not access a member of class 
> sun.security.krb5.Config (module java.security.jgss) with modifiers "public 
> static", module java.security.jgss does not export sun.security.krb5 to 
> 
>[junit4]>at 
> java.lang.reflect.AccessibleObject.slowCheckMemberAccess(java.base@9.0/AccessibleObject.java:384)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkAccess(java.base@9.0/AccessibleObject.java:376)
>[junit4]>at 
> org.apache.hadoop.minikdc.MiniKdc.initKDCServer(MiniKdc.java:478)
>[junit4]>at 
> org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:320)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:204)
>[junit4]>... 38 moreThrowable #2: 
> java.lang.NullPointerException
>[junit4]>at 
> org.apache.solr.cloud.ZkTestServer$ZKServerMain.shutdown(ZkTestServer.java:334)
>[junit4]>at 
> org.apache.solr.cloud.ZkTestServer.shutdown(ZkTestServer.java:526)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.shutdown(SaslZkACLProviderTest.java:218)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest.tearDown(SaslZkACLProviderTest.java:116)
>[junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
> {noformat}
> This is really bad, bad, bad! All security-related stuff should never ever be 
> reflected on!
> So we have to open an issue in the MiniKdc project so they remove the "hacks". 
> Elasticsearch had
> similar problems with Amazon's AWS API. They worked around it with a funny hack 
> in their SecurityPolicy
> (https://github.com/elastic/elasticsearch/pull/13538). But as Solr does not 
> run with SecurityManager
> in production, there is no way to do that. 
> We should report an issue on the MiniKdc project so they fix their code and 
> remove the really bad reflection on Java's internal classes.
> FYI, my 
> [conclusion|http://mail-archives.apache.org/mod_mbox/lucene-dev/201509.mbox/%3C014801d0ee23%245c8f5df0%2415ae19d0%24%40thetaphi.de%3E]
>  from yesterday.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10210) Solr features based on Hadoop that do not work on Java9

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10210:

Fix Version/s: (was: 8.x)

> Solr features based on Hadoop that do not work on Java9
> ---
>
> Key: SOLR-10210
> URL: https://issues.apache.org/jira/browse/SOLR-10210
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Hoss Man
>Assignee: Kevin Risden
>Priority: Major
>  Labels: Java9
> Fix For: 8.0, master (9.0)
>
>
> This issue will serve as a central tracking point / "blocker" for Solr issues 
> leveraging Hadoop code that does not work properly on java9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-03-15 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Fix Version/s: (was: 8.x)

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+8) - Build # 267 - Still Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/267/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.test

Error Message:
Error from server at http://127.0.0.1:40573/solr: no core retrieved for test

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:40573/solr: no core retrieved for test
at 
__randomizedtesting.SeedInfo.seed([AED96587FD9F127D:268D5A5D53637F85]:0)
at 
org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteExecutionException.create(BaseHttpSolrClient.java:66)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.createConfigSet(RoutedAliasUpdateProcessorTest.java:115)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.test(CategoryRoutedAliasUpdateProcessorTest.java:164)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.ja

[jira] [Commented] (LUCENE-8477) Improve handling of inner disjunctions in intervals

2019-03-15 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793794#comment-16793794
 ] 

Alan Woodward commented on LUCENE-8477:
---

Here is a proposal to fix this, using the new QueryVisitor API to work out if 
disjunctions have any sub-clauses with common first terms.  Given an interval 
{{BLOCK(a,or(BLOCK(b,c),b),d)}} we can ensure that all matches are collected by 
rewriting things so that the final clause {{d}} is moved inside the 
disjunction, yielding {{BLOCK(a,or(BLOCK(b,c,d),BLOCK(b,d)))}}.  Checking for 
common prefixes means that intervals of the form 
{{BLOCK(a,or(BLOCK(b,c),d),e)}} don't need to be rewritten, which will be more 
efficient when the query is run as we only need to iterate positions for the 
final term once.

> Improve handling of inner disjunctions in intervals
> ---
>
> Key: LUCENE-8477
> URL: https://issues.apache.org/jira/browse/LUCENE-8477
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8477.patch
>
>
> The current implementation of the disjunction interval produced by 
> {{Intervals.or}} is a direct implementation of the OR operator from the Vigna 
> paper.  This produces minimal intervals, meaning that (a) is preferred over 
> (a b), and (b) also over (a b).  This has advantages when it comes to 
> counting intervals for scoring, but also has drawbacks when it comes to 
> matching.  For example, a phrase query for ((a OR (a b)) BLOCK (c)) will not 
> match the document (a b c), because (a) will be preferred over (a b), and (a 
> c) does not match.
> This ticket is to discuss the best way of dealing with disjunctions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8477) Improve handling of inner disjunctions in intervals

2019-03-15 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8477:
--
Attachment: LUCENE-8477.patch

> Improve handling of inner disjunctions in intervals
> ---
>
> Key: LUCENE-8477
> URL: https://issues.apache.org/jira/browse/LUCENE-8477
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8477.patch
>
>
> The current implementation of the disjunction interval produced by 
> {{Intervals.or}} is a direct implementation of the OR operator from the Vigna 
> paper.  This produces minimal intervals, meaning that (a) is preferred over 
> (a b), and (b) also over (a b).  This has advantages when it comes to 
> counting intervals for scoring, but also has drawbacks when it comes to 
> matching.  For example, a phrase query for ((a OR (a b)) BLOCK (c)) will not 
> match the document (a b c), because (a) will be preferred over (a b), and (a 
> c) does not match.
> This ticket is to discuss the best way of dealing with disjunctions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-11) - Build # 98 - Failure!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/98/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testMaxCardinality

Error Message:
Error from server at http://127.0.0.1:53687/solr: no core retrieved for 
testMaxCardinality

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:53687/solr: no core retrieved for 
testMaxCardinality
at 
__randomizedtesting.SeedInfo.seed([1C2649C3B80106BC:6DE7AB4CC9272A3A]:0)
at 
org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteExecutionException.create(BaseHttpSolrClient.java:66)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.createConfigSet(RoutedAliasUpdateProcessorTest.java:115)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testMaxCardinality(CategoryRoutedAliasUpdateProcessorTest.java:300)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$

[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-03-15 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793767#comment-16793767
 ] 

Kevin Risden commented on SOLR-11763:
-

You might be right. I thought there was an email about using "8.x" and it would 
be renamed to 8.1 later during the release. I can't seem to find that email 
right now.

 

FWIW there are quite a few 8.x JIRAs. Might be worth a bulk change to 8.1 and 
then deleting the 8.x version?

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-03-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793758#comment-16793758
 ] 

David Smiley commented on SOLR-11763:
-

The Fix version here is wrong; it should point to a specific release this is 
landing in.  "8.x" should not exist as a version, I believe.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8718) Add docValueCount support for SortedSetDocValues

2019-03-15 Thread John Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Wang resolved LUCENE-8718.
---
Resolution: Workaround

> Add docValueCount support for SortedSetDocValues
> 
>
> Key: LUCENE-8718
> URL: https://issues.apache.org/jira/browse/LUCENE-8718
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 7.7.1
>Reporter: John Wang
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement a docValueCount method for SortedSetDocValues; see this comment:
>  
> [https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/SortedSetDocValues.java#L54]
>  
> Patch/PR: https://github.com/apache/lucene-solr/pull/603



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] javasoze commented on issue #603: LUCENE-8718: Add docValueCount support for SortedSetDocValues

2019-03-15 Thread GitBox
javasoze commented on issue #603: LUCENE-8718: Add docValueCount support for 
SortedSetDocValues
URL: https://github.com/apache/lucene-solr/pull/603#issuecomment-473353943
 
 
   Workaround presented by @jpountz; will close this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] javasoze closed pull request #603: LUCENE-8718: Add docValueCount support for SortedSetDocValues

2019-03-15 Thread GitBox
javasoze closed pull request #603: LUCENE-8718: Add docValueCount support for 
SortedSetDocValues
URL: https://github.com/apache/lucene-solr/pull/603
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8718) Add docValueCount support for SortedSetDocValues

2019-03-15 Thread John Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793753#comment-16793753
 ] 

John Wang commented on LUCENE-8718:
---

[~jpountz] Yes, this would yield the same result, nice! Thanks, will close this 
ticket and PR.
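
For context, the per-document value count can already be obtained without a dedicated 
docValueCount method by iterating the ordinals; a minimal sketch of that kind of 
workaround (not necessarily the exact one referenced above):

{code}
import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;

public class OrdCountExample {
  /** Returns the number of ordinals for the given doc, or 0 if it has none. */
  static int countOrds(LeafReader reader, String field, int docId) throws IOException {
    SortedSetDocValues dv = DocValues.getSortedSet(reader, field);
    if (!dv.advanceExact(docId)) {
      return 0;
    }
    int count = 0;
    // Ordinals are consumed until NO_MORE_ORDS is returned.
    for (long ord = dv.nextOrd(); ord != SortedSetDocValues.NO_MORE_ORDS; ord = dv.nextOrd()) {
      count++;
    }
    return count;
  }
}
{code}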

> Add docValueCount support for SortedSetDocValues
> 
>
> Key: LUCENE-8718
> URL: https://issues.apache.org/jira/browse/LUCENE-8718
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 7.7.1
>Reporter: John Wang
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement a docValueCount method for SortedSetDocValues; see this comment:
>  
> [https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/SortedSetDocValues.java#L54]
>  
> Patch/PR: https://github.com/apache/lucene-solr/pull/603



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7783 - Still Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7783/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSL: 23 rsp: 
{responseHeader={zkConnected=true,status=0,QTime=7,params={q=*:*,_stateVer_=dv_coll:4,group.limit=100,rows=100,wt=javabin,version=2,group.field=intGSL,group=true}},grouped={intGSL={matches=59,groups=[{groupValue=1783414073,doclist={numFound=7,start=0,maxScore=1.0,docs=[SolrDocument{id=2,
 intGSL=1783414073, longGSL=414905050109898236, doubleGSL=10001.490985118584, 
floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 AST 155147017, 
stringGSL=base_string_425784__00010009, boolGSL=true, 
sortableGSL=base_string_23585__00010003, _version_=1628081563176534016, 
_root_=2}, SolrDocument{id=3, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=false, 
sortableGSL=base_string_23585__00010003, _version_=1628081563187019776, 
_root_=3}, SolrDocument{id=6, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=true, 
sortableGSL=base_string_23585__00010003, _version_=1628081563187019777, 
_root_=6}, SolrDocument{id=1, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=false, 
sortableGSL=base_string_23585__00010003, _version_=1628081563182825472, 
_root_=1}, SolrDocument{id=5, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=false, 
sortableGSL=base_string_23585__00010003, _version_=1628081563182825472, 
_root_=5}, SolrDocument{id=0, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=true, 
sortableGSL=base_string_23585__00010003, _version_=1628081563181776896, 
_root_=0}, SolrDocument{id=4, intGSL=1783414073, longGSL=414905050109898236, 
doubleGSL=10001.490985118584, floatGSL=10003.44, dateGSL=Sun Oct 26 19:11:40 
AST 155147017, stringGSL=base_string_425784__00010009, boolGSL=true, 
sortableGSL=base_string_23585__00010003, _version_=1628081563190165504, 
_root_=4}]}}, 
{groupValue=null,doclist={numFound=23,start=0,maxScore=1.0,docs=[SolrDocument{id=10010,
 _version_=1628081563187019780, _root_=10010}, SolrDocument{id=10025, 
_version_=1628081563187019782, _root_=10025}, SolrDocument{id=10035, 
_version_=1628081563187019786, _root_=10035}, SolrDocument{id=10045, 
_version_=1628081563187019788, _root_=10045}, SolrDocument{id=8, 
intGSF=1695193457, longGSF=2090958993022667286, doubleGSF=20011.55841972542, 
floatGSF=20003.102, dateGSF=Sun Mar 21 09:55:22 AST 165522652, 
stringGSF=base_string_123401__00020004, boolGSF=true, 
sortableGSF=base_string_609606__00020004, _version_=1628081563030781952, 
_root_=8}, SolrDocument{id=10, intGSF=1695193457, longGSF=2090958993022667286, 
doubleGSF=20011.55841972542, floatGSF=20003.102, dateGSF=Sun Mar 21 09:55:22 
AST 165522652, stringGSF=base_string_123401__00020004, boolGSF=true, 
sortableGSF=base_string_609606__00020004, _version_=1628081563030781953, 
_root_=10}, SolrDocument{id=11, intGSF=1695193457, longGSF=2090958993022667286, 
doubleGSF=20011.55841972542, floatGSF=20003.102, dateGSF=Sun Mar 21 09:55:22 
AST 165522652, stringGSF=base_string_123401__00020004, boolGSF=false, 
sortableGSF=base_string_609606__00020004, _version_=1628081563030781954, 
_root_=11}, SolrDocument{id=13, intGSF=1695193457, longGSF=2090958993022667286, 
doubleGSF=20011.55841972542, floatGSF=20003.102, dateGSF=Sun Mar 21 09:55:22 
AST 165522652, stringGSF=base_string_123401__00020004, boolGSF=false, 
sortableGSF=base_string_609606__00020004, _version_=1628081563030781955, 
_root_=13}, SolrDocument{id=14, intGSF=1695203459, longGSF=2090958993022677286, 
doubleGSF=30018.55841972542, floatGSF=30010.102, dateGSF=Sun Mar 21 09:55:32 
AST 165522652, stringGSF=base_string_123401__00030012, boolGSF=true, 
sortableGSF=base_string_609606__00030006, _version_=1628081563030781956, 
_root_=14}, SolrDocument{id=10015, _version_=1628081563030781957, 
_root_=10015}, SolrDocument{id=10020, _version_=1628081563030781958, 
_root_=10020}, SolrDocument{id=24, intGSF=1695213463, 
longGSF=2090958993022687287, doubleGSF=40026.55841972542, floatGSF=40014.1, 
dateGSF=Sun Mar 21 09:55:42 AST 165522652, 
stringGSF=base_string_123401__00040018, boolGSF=true, 
sortableGSF=base_string_609606__00040015, _version_=1628081563030781959, 
_root_=24}, SolrDocument{id

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 45 - Failure

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/45/

363 tests failed.
FAILED:  org.apache.solr.CursorPagingTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.TestCrossCoreJoin.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.analysis.TestDeprecatedFilters.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ClusterStateUpdateTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.DeleteReplicaTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.OverseerTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.RecoveryZkTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.RemoteQueryErrorTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ReplaceNodeTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ReplicationFactorTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.ShardRoutingTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  
org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.SliceStateTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.SolrCLIZkUtilsTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.SolrCloudExampleTest.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.TestCloudDeleteByQuery.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.TestCloudRecovery2.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.initializationError

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  org.apache.solr.cloud.TestDistribDocBasedVersion.initial

[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-15 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793682#comment-16793682
 ] 

Uwe Schindler commented on LUCENE-8150:
---

I think this issue should be solved like Robert says: just remove all the stuff 
that is used when writing to indexes (like IndexFileDeleter, IndexWriter, ...). 
But when opening an index it should still throw the right exception 
(IndexTooOld). If we need to test for this file, we should still do so, but 
only in SegmentInfos.
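
For illustration, the kind of read-side check being described could look roughly like 
the sketch below (the file name is hard-coded here and the check would really live 
inside SegmentInfos; IndexTooOld refers to IndexFormatTooOldException):

{code}
import java.io.IOException;

import org.apache.lucene.index.IndexFormatTooOldException;
import org.apache.lucene.store.Directory;

public class LegacySegmentsGenCheck {
  /** Rejects indexes that still carry the legacy segments.gen file. */
  static void checkNoSegmentsGen(Directory dir) throws IOException {
    for (String file : dir.listAll()) {
      if ("segments.gen".equals(file)) {
        throw new IndexFormatTooOldException(file,
            "this index was written by a Lucene version that still used segments.gen");
      }
    }
  }
}
{code}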

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switch to 
> {{pending_segments_N}} in LUCENE-5925.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 266 - Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/266/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic

Error Message:
{} expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: {} expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([5A5D1E690612CB4B:F1A7037CD9CE4D65]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic(SolrRrdBackendFactoryTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 15065 lines...]
   [junit4] Suite: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
   [junit4]   2> 3114468 INFO  
(SUITE-SolrRrdBackendFactoryTest-seed#[5A5D1E690612CB4B]-worker) [] 
o.a.s.SolrTest

Re: Re: Authorization fails but api still renders

2019-03-15 Thread Branham, Jeremy (Experis)
// Adding the dev DL, as this may be a bug

Solr v7.7.0

I’m expecting the 401 on all the servers in all 3 clusters using the security 
configuration.
For example, when I access the core or collection APIs without authentication, 
it should return a 401.

On one of the servers, in one of the clusters, the authorization is completely 
ignored. The HTTP response is 200 and the API returns results.
The other server in this cluster works properly, returning a 401 when the 
protected API is accessed without authentication.

Interesting notes –
- If I use the IP or FQDN to access the server, authorization works properly 
and a 401 is returned. It’s only when I use the short hostname to access the 
server that the authorization is bypassed.
- On the broken server, a 401 is returned correctly when the ‘autoscaling 
suggestions’ API is accessed. That API uses a different resource path, which 
may be a clue to why the others fail.
  https://solr:8443/api/cluster/autoscaling/suggestions

Here is the security.json with sensitive data changed/removed –

{
  "authentication": {
    "blockUnknown": false,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "admin": "--REDACTED--",
      "reader": "--REDACTED--",
      "writer": "--REDACTED--"
    },
    "realm": "solr"
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {"name": "security-edit", "role": "admin"},
      {"name": "security-read", "role": "admin"},
      {"name": "schema-edit", "role": "admin"},
      {"name": "config-edit", "role": "admin"},
      {"name": "core-admin-edit", "role": "admin"},
      {"name": "collection-admin-edit", "role": "admin"},
      {"name": "autoscaling-read", "role": "admin"},
      {"name": "autoscaling-write", "role": "admin"},
      {"name": "autoscaling-history-read", "role": "admin"},
      {"name": "read", "role": "*"},
      {"name": "schema-read", "role": "*"},
      {"name": "config-read", "role": "*"},
      {"name": "collection-admin-read", "role": "*"},
      {"name": "core-admin-read", "role": "*"},
      {"name": "update", "role": "write"},
      {"collection": null, "path": "/admin/info/system", "role": "admin"}
    ],
    "user-role": {
      "admin": "admin",
      "reader": "read",
      "writer": "write"
    }
  }
}
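
For what it's worth, one way to exercise these rules from SolrJ is to send the same 
admin request with and without credentials and compare the outcome (a sketch only; 
the base URL and credentials are placeholders, and the expected codes just reflect 
the intent of the config above):

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.ModifiableSolrParams;

public class AuthCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; try the FQDN and the short hostname in turn to compare behavior.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("https://solr.example.com:8443/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "LIST");
      GenericSolrRequest list =
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params);

      // Without credentials, collection-admin-read should reject this with a 401.
      try {
        list.process(client);
        System.out.println("unauthenticated request unexpectedly succeeded");
      } catch (SolrException e) {
        System.out.println("unauthenticated request rejected with code " + e.code());
      }

      // With valid credentials the same request should succeed.
      list.setBasicAuthCredentials("reader", "changeme");
      System.out.println("authenticated response: " + list.process(client).getResponse());
    }
  }
}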


 
Jeremy Branham
jb...@allstate.com

On 3/14/19, 10:06 PM, "Zheng Lin Edwin Yeo"  wrote:

Hi,

I can't quite follow your question. Are you facing the 401 error on all the
clusters or just one of them?

Also, which Solr version are you using?

Regards,
Edwin

On Fri, 15 Mar 2019 at 05:15, Branham, Jeremy (Experis) 
wrote:

> I’ve discovered the authorization works properly if I use the FQDN to
> access the Solr node, but the short hostname completely circumvents it.
> They are all internal server clusters, so I’m using self-signed
> certificates [the same exact certificate] on each. The SAN portion of the
> cert contains the IP, short, and FQDN of each server.
>
> I also diff’d the two servers’ Solr installation directories, and confirmed
> they are identical.
> They are using the same exact versions of Java and zookeeper, with the
> same chroot configuration. [different zk clusters]
>
>
> Jeremy Branham
> jb...@allstate.com
>
> On 3/14/19, 10:44 AM, "Branham, Jeremy (Experis)" 
> wrote:
>
> I’m using Basic Auth on 3 different clusters.
> On 2 of the clusters, authorization works fine. A 401 is returned when
> I try to access the core/collection apis.
>
> On the 3rd cluster I can see the authorization failed, but the api
> results are still returned.
>
> Solr.log
> 2019-03-14 09:25:47.680 INFO  (qtp1546693040-152) [   ]
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal.
> failed permission {
>   "name":"core-admin-read",
>   "role":"*"}
>
>
> I’m using different zookeeper clusters for each solr cluster, but
> using the same security.json contents.
> I’ve tried refreshing the ZK node, and bringing the whole Solr cluster
> down and back up.
>
> Is there some sort of caching that could be happening?
>
> I wrote an installation script that I’ve used to setup each cluster,
> so I’m thinking I’ll wipe it out and re-run.
> But before I do this, I thought I’d ask the community for input. Maybe
> a bug?
>
>
> Jeremy Branham
> jb...@allstate.com
> Allstate Insurance Company | UCV Technology Services | Information
> Services Group
>
>
>
>




[JENKINS] Lucene-Solr-repro - Build # 3024 - Unstable

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3024/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1281/consoleText

[repro] Revision: 76babf876a49f82959cc36a1d7ef922a9c2dddff

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testCreateLargeSimCollections -Dtests.seed=A3200D931DA1479D 
-Dtests.multiplier=2 -Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=US/Pacific -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
8f29d1eaadce5ca2c79b1f48161a8dda696d9952
[repro] git fetch
[repro] git checkout 76babf876a49f82959cc36a1d7ef922a9c2dddff

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimLargeCluster
[repro] ant compile-test

[...truncated 3565 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestSimLargeCluster" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=A3200D931DA1479D -Dtests.multiplier=2 
-Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH -Dtests.timezone=US/Pacific 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 142637 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro] git checkout 8f29d1eaadce5ca2c79b1f48161a8dda696d9952

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-15 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793604#comment-16793604
 ] 

Adrien Grand commented on LUCENE-8150:
--

I could work around it now that I know of this trap, but if it could be fixed, 
that would be even better. :)

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switch to 
> {{pending_segments_N}} in LUCENE-5925.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-15 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793599#comment-16793599
 ] 

Michael McCandless commented on LUCENE-8150:


{quote}I think it's due to the fact that I'm always filtering by open issues on 
jirasearch, and it filters out issues that are marked as "patch available"
{quote}
Oh no!  Sorry :)  I will try to fix this.  Clearly 
[http://jirasearch.mikemccandless.com|http://jirasearch.mikemccandless.com/] is 
buggy here ... it seems to think issues that have patches are resolved?

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switch to 
> {{pending_segments_N}} in LUCENE-5925.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Windows (64bit/jdk-12-ea+32) - Build # 97 - Unstable!

2019-03-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/97/
Java: 64bit/jdk-12-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Expected metric minimums for prefix SECURITY./authentication.: 
{failMissingCredentials=2, authenticated=19, passThrough=9, 
failWrongCredentials=1, requests=31, errors=0}, but got: 
{failMissingCredentials=2, authenticated=18, passThrough=10, totalTime=8858000, 
failWrongCredentials=1, requestTimes=1650, requests=31, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication.: {failMissingCredentials=2, authenticated=19, 
passThrough=9, failWrongCredentials=1, requests=31, errors=0}, but got: 
{failMissingCredentials=2, authenticated=18, passThrough=10, totalTime=8858000, 
failWrongCredentials=1, requestTimes=1650, requests=31, errors=0}
at 
__randomizedtesting.SeedInfo.seed([CE89A9034EDD1BC2:72E7DF11EA8E98B8]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:125)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:81)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:306)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailur
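
The assertion that fails above is a minimums check: extra keys in the actual metrics map 
(totalTime, requestTimes) are fine, and passThrough=10 exceeds its minimum of 9, but 
authenticated=18 falls short of the expected minimum of 19. A minimal sketch of such a 
check, with hypothetical names rather than the real SolrCloudAuthTestCase code:

{code:java}
import java.util.Map;

// Hypothetical sketch, not the actual SolrCloudAuthTestCase implementation:
// every expected key must meet its minimum; keys only present in the actual
// map (e.g. totalTime, requestTimes) are ignored.
public class MetricMinimumsSketch {
  static void assertMetricMinimums(Map<String, Long> expectedMinimums, Map<String, Long> actual) {
    for (Map.Entry<String, Long> expected : expectedMinimums.entrySet()) {
      Long got = actual.get(expected.getKey());
      if (got == null || got < expected.getValue()) {
        // In the failure above this trips on authenticated: 18 < 19.
        throw new AssertionError("Expected at least " + expected.getValue()
            + " for " + expected.getKey() + " but got " + got);
      }
    }
  }
}
{code}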

[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-15 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793597#comment-16793597
 ] 

Adrien Grand commented on LUCENE-8150:
--

Indeed! I think it's due to the fact that I'm always filtering by open issues 
on jirasearch, and it filters out issues that are marked as "patch available". 
I'll bring the patch up-to-date.

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[JENKINS] Lucene-Solr-repro - Build # 3022 - Unstable

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3022/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/44/consoleText

[repro] Revision: cedff86aaaee70a28bd56372666b88f21381c975

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=StressHdfsTest -Dtests.method=test 
-Dtests.seed=65EE1590E4FFB3D1 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-SD -Dtests.timezone=America/Argentina/Buenos_Aires 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
8f29d1eaadce5ca2c79b1f48161a8dda696d9952
[repro] git fetch
[repro] git checkout cedff86aaaee70a28bd56372666b88f21381c975

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   StressHdfsTest
[repro] ant compile-test

[...truncated 3575 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.StressHdfsTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=65EE1590E4FFB3D1 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-SD -Dtests.timezone=America/Argentina/Buenos_Aires 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 37739 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.StressHdfsTest
[repro] git checkout 8f29d1eaadce5ca2c79b1f48161a8dda696d9952

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-15 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793595#comment-16793595
 ] 

Michael McCandless commented on LUCENE-8150:


Hmm it looks like this was never committed?

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-03-15 Thread Jan Høydahl (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793591#comment-16793591
 ] 

Jan Høydahl commented on SOLR-12120:


Another push to the PR, marking the interface as {{@lucene.experimental}}.

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to call the audit plugin with an 
> AuditEvent at important points, such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.
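
A minimal illustration of the plugin type sketched in the description above (hypothetical 
class and method names, not the API committed under SOLR-12120):

{code:java}
// Hypothetical sketch only; names are illustrative, not the committed SOLR-12120 API.
public abstract class AuditLoggerPlugin {

  public enum EventType { AUTHENTICATED, REJECTED, ANONYMOUS, ERROR }

  /** Carries the who/what/outcome of an audit-worthy request. */
  public static class AuditEvent {
    public final EventType type;
    public final String username;
    public final String resource;

    public AuditEvent(EventType type, String username, String resource) {
      this.type = type;
      this.username = username;
      this.resource = resource;
    }
  }

  /** Called from the request pipeline (e.g. SolrDispatchFilter) at audit points. */
  public abstract void audit(AuditEvent event);
}
{code}

A concrete implementation (logging to a file, syslog, Kafka, etc.) would then be referenced 
from the "auditlogging" section of security.json shown above.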






[jira] [Resolved] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-15 Thread Jan Høydahl (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13244.

Resolution: Fixed

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
> Attachments: solr13244.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole view breaks and error messages like 
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> are shown instead. This happens on the Ajax requests that fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or a different background colour, to make clear that the node is configured 
> but not live.
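
A rough SolrJ-flavoured sketch of the idea in the description above (the actual fix lives 
in the AngularJS Admin UI; the class and method names here are hypothetical): list every 
configured node, but only request metrics for nodes present in live_nodes and mark the 
rest as DOWN.

{code:java}
import java.util.Set;

import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Illustration only: the real SOLR-13244 fix is in the Admin UI JavaScript.
public class LiveNodeMetricsSketch {
  public static void printNodeStatus(CloudSolrClient client, Iterable<String> configuredNodes) {
    // live_nodes as currently known from ZooKeeper
    Set<String> liveNodes = client.getZkStateReader().getClusterState().getLiveNodes();
    for (String node : configuredNodes) {
      if (liveNodes.contains(node)) {
        System.out.println(node + ": live, safe to request admin/metrics and admin/info/system");
      } else {
        System.out.println(node + ": DOWN, keep in the list but skip the metrics request");
      }
    }
  }
}
{code}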






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1281 - Still Failing

2019-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1281/

No tests ran.

Build Log:
[...truncated 23440 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 25: list item 
index: expected 2, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 26: list item 
index: expected 3, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 224: list item 
index: expected 2, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 225: list item 
index: expected 3, got 1
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 25: list item 
index: expected 2, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 26: list item 
index: expected 3, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 224: list item 
index: expected 2, got 1
[asciidoctor:convert] asciidoctor: WARNING: aliases.adoc: line 225: list item 
index: expected 3, got 1
 [java] Processed 2513 links (2057 relative) to 3331 anchors in 252 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-

[jira] [Commented] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793588#comment-16793588
 ] 

ASF subversion and git services commented on SOLR-13244:


Commit 4540fa427a4dfaf3ae7947c6aa9b6d6d456e43cc in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4540fa4 ]

SOLR-13244: Nodes view fails when a node is temporarily down

(cherry picked from commit 8f29d1eaadce5ca2c79b1f48161a8dda696d9952)


> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
> Attachments: solr13244.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole view breaks and error messages like 
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> are shown instead. This happens on the Ajax requests that fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or a different background colour, to make clear that the node is configured 
> but not live.





