[jira] [Comment Edited] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488429#comment-16488429
 ] 

mosh edited comment on SOLR-12361 at 5/24/18 5:48 AM:
--

{quote}Lets say that docs can have semantic child relations (named child 
relations), *OR* (XOR?) they can have anonymous ones – what we have today.
{quote}
 Just to make sure we're on the same page [~dsmiley].
Do you propose we deprecate _childDocuments for the time being, leaving it as is 
while implementing child docs as field values?
Later on (Solr 8.0), _childDocuments would be removed?
Plus, would we have to enforce that no childDocument is inserted under the 
_childDocuments_ key if anonymous childDocuments were inserted into the current doc?

 



> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488462#comment-16488462
 ] 

mosh commented on SOLR-12361:
-

{quote}Also, I wonder if we even need to directly "flatten" the tree to a List 
after all? Consider your change to DocumentBuilder; maybe it should just 
operate recursively? Granted DocumentBuilder would then need to return a 
List but whatever.{quote}
This could be changed if it is deemed a better approach.







[jira] [Commented] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488458#comment-16488458
 ] 

Noble Paul commented on SOLR-12294:
---

This is a known issue. It will be fixed in the next release. You can use the 
workaround I have given in the comment.

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside the .system collection when 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the .system 
> collection custom code is lazily loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration, and while debugging I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get() method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the .system 
> collection is not up and the above exception is thrown...
> So maybe the routine for UpdateProcessors during core initialization is not 
> implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack Solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with the -c option
>  # Set up the .system collection:
>  2.1 Upload the custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @<path to 
> jar>/update-processor-0.0.1-SNAPSHOT.jar http://<url to 
> solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add a replica to the .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Set up the test collection:
>  3.1 Upload the test conf to ZK --> ./zkcli.sh -zkhost <zk host> -cmd 
> upconfig -confdir <path to conf> -confname test_conf
>  3.2 Create a test1 collection with the commented-out UP-chain inside 
> solrconfig.xml via the Admin UI
>  3.3 Add the blob to the test collection --> curl http://<url to 
> solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload the test conf again --> ./zkcli.sh -zkhost 
> <zk host> -cmd upconfig -confdir <path to conf> -confname test_conf
>  3.5 Reload the test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart Solr
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> the core init routine, but it doesn't due to the above exception.
> Sometimes you are lucky and the test1 collection is initialized after the 
> .system collection. But ~90% of the time this isn't the case...
> Let me know if you need further details,
>  
> Johannes






[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+14) - Build # 41 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/41/
Java: 64bit/jdk-11-ea+14 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([83C69C7239E65E12:B92A3A8971A33EA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:104)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:97)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488429#comment-16488429
 ] 

mosh commented on SOLR-12361:
-

{quote}Lets say that docs can have semantic child relations (named child 
relations), *OR* (XOR?) they can have anonymous ones – what we have today.
{quote}
 Just to make sure we're on the same page [~dsmiley].
Do you propose we deprecate _childDocuments for the time being, leaving it as is 
while implementing child docs as field values?
Later on (Solr 8.0), _childDocuments would be removed?

 







[jira] [Commented] (SOLR-8889) SolrCloud deleteById is broken when router.field is set

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488408#comment-16488408
 ] 

Shawn Heisey commented on SOLR-8889:


[~jsp08], I've been following your thread on the mailing list.  Using 
deleteByQuery runs the risk of big pauses when merges happen, but I think it 
might avoid the problems described on this issue.

SOLR-5890 seems to say that if you explicitly send a deleteById request to a 
core that's a replica of the correct shard, rather than sending it to the 
collection, and if all IDs in the delete are on that shard, deleteById might 
work.  (For SolrJ, that would require using HttpSolrClient rather than 
CloudSolrClient)

The actual change for SOLR-5890 looks promising (using the \_route\_ 
parameter), but then SOLR-7384 appears to indicate that it doesn't actually 
work correctly.

More than one of the issues touching on this problem has mentioned broadcasting 
the delete to all shards as a workaround or a solution, but it seems that this 
hasn't actually been implemented.  One idea, which I admit is a drastic notion, 
is to implement a parameter that would force deleteById on SolrCloud to send 
the delete to all shards.
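The composite-key prefix syntax and the per-shard delete body that a broadcast workaround would need can be sketched in plain Java. This is illustrative only; the helper names are made up and are not SolrJ API:

```java
// Sketch of the composite-key routing syntax discussed above; the
// helper names are hypothetical, not part of SolrJ.
public class RoutedDeleteSketch {
    /** Prefix a plain ID with its route key ("routekey!id"). */
    static String compositeId(String routeKey, String id) {
        return routeKey + "!" + id;
    }

    /** Body of a JSON delete-by-id request as accepted by /update. */
    static String deletePayload(String id) {
        return "{\"delete\":{\"id\":\"" + id + "\"}}";
    }

    public static void main(String[] args) {
        // With router.field set, prefixing may be needed for the delete
        // to reach the right shard (untested, per the comment above).
        System.out.println(compositeId("tenantA", "doc42")); // tenantA!doc42
    }
}
```

A real broadcast workaround would then post the delete payload to one replica of every shard rather than relying on routing.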

> SolrCloud deleteById is broken when router.field is set
> ---
>
> Key: SOLR-8889
> URL: https://issues.apache.org/jira/browse/SOLR-8889
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR_8889_investigation.patch
>
>
> If you set router.field on your collection to shard by something other than 
> the ID, then deleting documents by ID fails some of the time (how much 
> depends on how sharded the collection is).  I suspect that it'd work if the 
> IDs provided when deleting by ID were prefixed using the composite key syntax 
> -- "routekey!id" though I didn't check.  This is terrible.  Internally Solr 
> should broadcast to all the shards if there is no composite key prefix.
> Some affected code is UpdateRequest.getRoutes.






[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488401#comment-16488401
 ] 

David Smiley commented on SOLR-12361:
-

I looked closer at what it would take to remove the direct flattening.  
AddUpdateCommand would no longer implement Iterable.  
DirectUpdateHandler2.allowDuplicateUpdate currently grabs this iterable and 
passes it to Lucene's IndexWriter.  I think it could instead grab a List 
from the command, then check whether the size is > 1 and add one 
document or the set of them atomically (no need for cmd.isBlock).  Maybe I'll 
look at this more tomorrow.  This could remove the isBlock call, which is 
potentially expensive-ish.  We could even do this and leave DocumentBuilder 
alone; keep the flatten.  Shrug; eh; debatable... 
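The size check described above could look roughly like this, with stub types standing in for Lucene's IndexWriter (all names here are hypothetical, a sketch of the idea rather than the real code path):

```java
import java.util.List;

public class AtomicAddSketch {
    /** Minimal stand-in for the two IndexWriter entry points the comment
     *  refers to; the real types are Lucene's, these are stubs. */
    interface Writer {
        void addDocument(Object doc);     // single document
        void addDocuments(List<?> docs);  // atomic block of documents
    }

    /** Add one doc directly, or the whole set atomically when children
     *  are present -- replacing the cmd.isBlock check with a size check. */
    static void add(Writer writer, List<Object> docs) {
        if (docs.size() > 1) {
            writer.addDocuments(docs);        // parent plus children, atomically
        } else {
            writer.addDocument(docs.get(0));  // plain single-doc update
        }
    }
}
```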







[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4654 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4654/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
last stage: inconsistent endOffset at pos=21: 67 vs 71; token=lnhsmof ypvaz dbh

Stack Trace:
java.lang.IllegalStateException: last stage: inconsistent endOffset at pos=21: 
67 vs 71; token=lnhsmof ypvaz dbh
at 
__randomizedtesting.SeedInfo.seed([4B2538DE118B4F1E:217E87CF48C56FED]:0)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:122)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:746)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:657)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:559)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:882)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


[jira] [Resolved] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-12378.

   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

Thanks all!

> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index because 
> we get this line throwing shade:
> {code:java}
> throw new SolrException(SERVER_ERROR,
> "Doc exists in index, but has null versionField: "
> + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to have an option to allow the 
> existing docs to be missing this field and to simply treat those docs as 
> older than anything coming in with that field set.
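The proposed option amounts to a small comparison rule. A minimal sketch, assuming numeric versions for simplicity (the method name is made up; the real processor works on the configured versionField):

```java
public class VersionRuleSketch {
    /** Decide whether an incoming doc should replace the indexed one.
     *  A missing (null) indexed version is treated as older than any
     *  incoming version, instead of throwing as the current code does. */
    static boolean incomingWins(Long indexedVersion, long incomingVersion) {
        if (indexedVersion == null) {
            return true;  // legacy doc without versionField: always older
        }
        return incomingVersion > indexedVersion;
    }
}
```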






[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488389#comment-16488389
 ] 

ASF subversion and git services commented on SOLR-12378:


Commit 7f0b184c66d501e45f33ae8a52ba4603725d39f0 in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7f0b184 ]

SOLR-12378: Support missing versionField on indexed docs in 
DocBasedVersionConstraintsURP.








[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488388#comment-16488388
 ] 

Mark Miller commented on SOLR-12297:


There is probably work there around SSL as well. Those are the major things I 
have to do differently in JettySolrRunner.

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488386#comment-16488386
 ] 

Mark Miller commented on SOLR-12297:


Sounds good Shawn - looking at those issues, I don't anticipate too much 
clashing.

One thing that may interest you that I would love help on is configuring our 
Jetty instance for Http/2 as well as Http/1.1. Currently I'm just setting 
everything up for JettySolrRunner and our core tests.







[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488384#comment-16488384
 ] 

David Smiley commented on SOLR-12361:
-

I mostly finished reviewing #382 (removal of _childDocuments) but I ought to 
look a bit more; plus I need to look at your #2 path.  As I was looking at 
#382, I kept thinking of back-compat ramifications, and then an epiphany hit 
me: let's say that docs can have semantic child relations (named child 
relations), *OR* (XOR?) they can have anonymous ones -- what we have today.  
The latter (anonymous), let's say we retain for now in 7x (keep the 
_childDocuments impl) but come 8.0 we remove it.  For now just leave it, and we 
deal with child docs potentially in both places.  In this scheme, 
doc.getChildDocuments only returns anonymous children (the impl doesn't 
change).  We can't change the name, unfortunately, but we can add javadocs that 
scream this point and potentially mark it deprecated.  With this path, the main 
thing to concern ourselves with right now is simply supporting 
SolrInputDocument as field values and not worrying about disturbing 
back-compat.  This means we also needn't worry about, say, your change to 
ClientUtils.writeXML, since child docs can come from either place and that's 
okay (no duplication of the same docs to worry about).

Also, I wonder if we even need to directly "flatten" the tree to a List after 
all?  Consider your change to DocumentBuilder; maybe it should just operate 
recursively?  Granted, DocumentBuilder would then need to return a 
List, but whatever.
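To illustrate what "operate recursively" might look like, here is a minimal sketch. ToyDoc, its methods, and the children-first ordering are hypothetical stand-ins for SolrInputDocument and DocumentBuilder, not the actual Solr API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// ToyDoc is a hypothetical stand-in for SolrInputDocument: a document is a
// map of field name -> value, where a value may itself be a ToyDoc (a named
// child relation) or a List of ToyDocs.
public class ToyDoc {
    final Map<String, Object> fields = new LinkedHashMap<>();

    public ToyDoc set(String name, Object value) {
        fields.put(name, value);
        return this;
    }

    /** Recursively collects every nested child doc plus the doc itself. */
    public static List<ToyDoc> flatten(ToyDoc root) {
        List<ToyDoc> out = new ArrayList<>();
        collect(root, out);
        return out;
    }

    private static void collect(ToyDoc doc, List<ToyDoc> out) {
        for (Object value : doc.fields.values()) {
            if (value instanceof ToyDoc) {
                collect((ToyDoc) value, out);
            } else if (value instanceof List) {
                for (Object item : (List<?>) value) {
                    if (item instanceof ToyDoc) {
                        collect((ToyDoc) item, out);
                    }
                }
            }
        }
        out.add(doc); // children first, parent last
    }

    public static void main(String[] args) {
        ToyDoc child1 = new ToyDoc().set("id", "c1");
        ToyDoc child2 = new ToyDoc().set("id", "c2");
        ToyDoc parent = new ToyDoc().set("id", "p")
                .set("reviews", List.of(child1, child2));
        System.out.println(ToyDoc.flatten(parent).size()); // prints 3
    }
}
```

The children-first ordering mirrors how Lucene index blocks store the parent document last, which is presumably what a recursive DocumentBuilder would need to preserve.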

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.






[jira] [Commented] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-23 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488383#comment-16488383
 ] 

Erick Erickson commented on SOLR-12247:
---

I won't BadApple this test tomorrow then.

> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}






[JENKINS-EA] Lucene-Solr-BadApples-master-Linux (64bit/jdk-11-ea+14) - Build # 41 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/41/
Java: 64bit/jdk-11-ea+14 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([1F895AD6ABF306EB:97DD650C050F6B13]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:104)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:97)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[GitHub] lucene-solr pull request #382: WIP: SOLR-12361

2018-05-23 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/382#discussion_r190457103
  
--- Diff: solr/solrj/src/java/org/apache/solr/common/SolrDocument.java ---
@@ -388,20 +386,47 @@ public void addChildDocuments(Collection<SolrDocument> children) {
  }
}

+   @Override
+   public Map<String, Object> getChildDocumentsMap() {
+     Map<String, Object> childDocs = new HashMap<>();
+     for (Entry<String, Object> entry : _fields.entrySet()) {
+       Object value = entry.getValue();
+       if (objIsDocument(value)) {
+         childDocs.put(entry.getKey(), value);
+       }
+     }
+     return childDocs;
+   }
+
    /** Returns the list of child documents, or null if none. */
    @Override
    public List<SolrDocument> getChildDocuments() {
-     return _childDocuments;
+     List<SolrDocument> childDocs = new ArrayList<>();
+     Stream<Entry<String, SolrDocument>> fields = _fields.entrySet().stream()
+       .filter(value -> value.getValue() instanceof SolrInputDocument)
--- End diff --

Or the value might be a List of SolrInputDocument, so we have to check for 
that too; right?
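A small sketch of the check being suggested, where a field value counts as child documents if it is either a single document or a List of them. Doc is a toy marker class standing in for SolrInputDocument, and isChildValue is a hypothetical helper name:

```java
import java.util.List;

public class ChildValueCheck {
    // Toy marker class; in the real code this would be SolrInputDocument.
    public static class Doc { }

    /** True if the value is a single document or a non-empty List of documents. */
    public static boolean isChildValue(Object value) {
        if (value instanceof Doc) {
            return true;
        }
        if (value instanceof List) {
            List<?> list = (List<?>) value;
            return !list.isEmpty() && list.stream().allMatch(e -> e instanceof Doc);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isChildValue(new Doc()));           // true
        System.out.println(isChildValue(List.of(new Doc())));  // true
        System.out.println(isChildValue("plain field value")); // false
    }
}
```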





[GitHub] lucene-solr pull request #382: WIP: SOLR-12361

2018-05-23 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/382#discussion_r190457184
  
--- Diff: solr/solrj/src/java/org/apache/solr/common/SolrDocument.java ---
@@ -388,20 +386,47 @@ public void addChildDocuments(Collection<SolrDocument> children) {
  }
}

+   @Override
+   public Map<String, Object> getChildDocumentsMap() {
+     Map<String, Object> childDocs = new HashMap<>();
+     for (Entry<String, Object> entry : _fields.entrySet()) {
+       Object value = entry.getValue();
+       if (objIsDocument(value)) {
+         childDocs.put(entry.getKey(), value);
+       }
+     }
+     return childDocs;
+   }
+
    /** Returns the list of child documents, or null if none. */
    @Override
    public List<SolrDocument> getChildDocuments() {
-     return _childDocuments;
+     List<SolrDocument> childDocs = new ArrayList<>();
+     Stream<Entry<String, SolrDocument>> fields = _fields.entrySet().stream()
+       .filter(value -> value.getValue() instanceof SolrInputDocument)
+       .map(value -> new AbstractMap.SimpleEntry<>(value.getKey(),
+           (SolrDocument) value.getValue()));
+     fields.forEach(e -> childDocs.add(e.getValue()));
+     return childDocs.size() > 0 ? childDocs : null;
    }

@Override
public boolean hasChildDocuments() {
--- End diff --

We'd probably deprecate these if we go with this overall approach.  It's 
too much internal cost to call this method.





[GitHub] lucene-solr pull request #382: WIP: SOLR-12361

2018-05-23 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/382#discussion_r190386655
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java ---
@@ -376,20 +376,6 @@ public void writeSolrDocument(String name, 
SolrDocument doc, ReturnFields return
 writeVal(fname, val);
   }
 }
-
--- End diff --

Good; this can be repeated in GeoJSONResponseWriter too, I think.





[jira] [Updated] (SOLR-12394) Remove managmentPath

2018-05-23 Thread Gus Heck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-12394:

Attachment: SOLR-12394.patch

> Remove managmentPath 
> -
>
> Key: SOLR-12394
> URL: https://issues.apache.org/jira/browse/SOLR-12394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gus Heck
>Priority: Minor
> Attachments: SOLR-12394.patch
>
>
> NodeConfig has a config property called managmentPath, which doesn't appear 
> to serve any coherent function and is in fact documented in 
> [https://lucene.apache.org/solr/guide/7_3/format-of-solr-xml.html] as:
> {quote}{{managementPath}}
> Currently non-operational.
> {quote}
> The code appears to have been added initially in SOLR-695, and that ticket 
> appears to relate to an elimination of a special case for single core 
> configurations. It seems that this may have been an attempt to support single 
> cores that had no name (a legacy mode of operation I guess, but before my 
> time) and yet still allow such single core setups to later have additional 
> cores added?
> So this ticket is a suggestion that we remove this configuration that 
> allegedly isn't working anyway, OR we make it work and give it good clear 
> documentation in code and in the ref guide so that folks don't have to waste 
> a lot of time figuring out what it does(n't do) to understand the code.
> Attaching patch to remove it. 
>  
>  






[jira] [Created] (SOLR-12394) Remove managmentPath

2018-05-23 Thread Gus Heck (JIRA)
Gus Heck created SOLR-12394:
---

 Summary: Remove managmentPath 
 Key: SOLR-12394
 URL: https://issues.apache.org/jira/browse/SOLR-12394
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Gus Heck


NodeConfig has a config property called managmentPath, which doesn't appear to 
serve any coherent function and is in fact documented in 
[https://lucene.apache.org/solr/guide/7_3/format-of-solr-xml.html] as:
{quote}{{managementPath}}

Currently non-operational.
{quote}
The code appears to have been added initially in SOLR-695, and that ticket 
appears to relate to an elimination of a special case for single core 
configurations. It seems that this may have been an attempt to support single 
cores that had no name (a legacy mode of operation I guess, but before my time) 
and yet still allow such single core setups to later have additional cores 
added?

So this ticket is a suggestion that we remove this configuration that allegedly 
isn't working anyway, OR we make it work and give it good clear documentation 
in code and in the ref guide so that folks don't have to waste a lot of time 
figuring out what it does(n't do) to understand the code.

Attaching patch to remove it. 

 

 






[jira] [Commented] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488345#comment-16488345
 ] 

Robert Muir commented on LUCENE-8326:
-

First looking at the API change, it would be good to understand the goals. 

This change wraps 8 or 9 existing setters such as {{setMinTermLen}} with a 
"configuration class". There is also another class related to boosts. But 
everything is still just as mutable as before, so from my perspective it only 
adds additional indirection/abstraction, which is undesired.

If we want to make MLT immutable or something like that, we should first figure 
out if that's worth it. From my perspective, I'm not sold on this for 
MoreLikeThis itself, since it's lightweight and stateless, and since I can't 
see a way for MoreLikeThisQuery to cache efficiently.

On the other hand, MoreLikeThisQuery is kind of a mess, but that isn't 
addressed with the refactoring. Really, all queries should be immutable for 
caching purposes, and should all have correct equals/hashCode: but it seems 
like a lost cause with MoreLikeThisQuery since it does strange stuff in 
rewrite: it's not really a per-segment thing. Because of how the query works, 
it's not obvious to me if/how we can improve it with immutability...

Also, currently MoreLikeThisQuery doesn't accept MoreLikeThis as a parameter or 
anything, and only uses it internally. So as it stands (also with this patch) 
it still has a "duplicate" API of all the parameters, which isn't great.

So I think if we want to change the API for this stuff, we should figure out 
what the goals are. If it's just to, say, consolidate the API between 
MoreLikeThis and MoreLikeThisQuery, I can buy into that (although I have never 
used the latter myself, only the former). However, the other queries use 
builders for such purposes, so that's probably something to consider.

For the Solr changes, my only comment would be that instead of running actual 
queries, isn't it good enough to just test that XYZ configuration produces a 
correct MLT object? Otherwise the test seems fragile from my perspective.
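As an illustration of the builder style used by the other queries, here is a minimal sketch of an immutable parameters object. MltParams, Builder, and the two fields are hypothetical names and defaults, not the actual MoreLikeThis API:

```java
// Immutable once built; value-based equals/hashCode is what would let a
// query holding these params participate in Lucene's query cache.
public final class MltParams {
    private final int minTermLen;
    private final int minDocFreq;

    private MltParams(Builder b) {
        this.minTermLen = b.minTermLen;
        this.minDocFreq = b.minDocFreq;
    }

    public int minTermLen() { return minTermLen; }
    public int minDocFreq() { return minDocFreq; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MltParams)) return false;
        MltParams p = (MltParams) o;
        return minTermLen == p.minTermLen && minDocFreq == p.minDocFreq;
    }

    @Override
    public int hashCode() {
        return 31 * minTermLen + minDocFreq;
    }

    public static final class Builder {
        private int minTermLen = 0; // illustrative defaults, not MLT's real ones
        private int minDocFreq = 5;

        public Builder minTermLen(int v) { this.minTermLen = v; return this; }
        public Builder minDocFreq(int v) { this.minDocFreq = v; return this; }
        public MltParams build() { return new MltParams(this); }
    }
}
```

Whether this is worth it hinges on the caching question raised above; an immutable params object only pays off if the query holding it gains a correct equals/hashCode from it.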

> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch, LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve code readability, test 
> coverage, and maintenance.
> The scope of this Jira issue is to start the More Like This refactor from 
> the More Like This Params.
> This Jira will not improve the current More Like This, but just keep the 
> same functionality with refactored code.
> Other Jira issues will follow, improving the overall code readability, test 
> coverage, and maintenance.






[jira] [Commented] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488324#comment-16488324
 ] 

ASF subversion and git services commented on SOLR-12247:


Commit bf79ac6ffdb87fb90bcb4fe9199e099eb24ceb0e in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf79ac6 ]

SOLR-12247: Ensure an event will contains newly added node


> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}






[jira] [Commented] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488323#comment-16488323
 ] 

ASF subversion and git services commented on SOLR-12247:


Commit 71ed5bafac92f3dd0e8ca4388f49f2c039a8db5b in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=71ed5ba ]

SOLR-12247: Ensure an event will contains newly added node


> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}






[jira] [Commented] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488317#comment-16488317
 ] 

Robert Muir commented on LUCENE-8326:
-

Sorry, it may have been my fault. I checked "patch attached" when moving the 
issue to let the automated checks run.

> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch, LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve code readability, test 
> coverage, and maintenance.
> The scope of this Jira issue is to start the More Like This refactor from 
> the More Like This Params.
> This Jira will not improve the current More Like This, but just keep the 
> same functionality with refactored code.
> Other Jira issues will follow, improving the overall code readability, test 
> coverage, and maintenance.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1967 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1967/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=2720, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@9.0.4/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=2720, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@9.0.4/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([956D0D2614C9]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=2713, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@9.0.4/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=2713, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@9.0.4/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([956D0D2614C9]:0)


FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:
--> https://127.0.0.1:44529/an/k/collection1_shard2_replica_n41:Failed to 
execute sqlQuery 'select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc' against 
JDBC connection 'jdbc:calcitesolr:'. Error while executing SQL "select id, 
field_i, str_s from collection1 where (text='()' OR text='') AND 
text='' order by field_i desc": java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: --> 
https://127.0.0.1:33933/an/k/collection1_shard2_replica_n45/:id must have 
DocValues to use this feature.

Stack Trace:
java.io.IOException: --> 
https://127.0.0.1:44529/an/k/collection1_shard2_replica_n41:Failed to execute 
sqlQuery 'select id, field_i, str_s from collection1 where (text='()' OR 
text='') AND text='' order by field_i desc' against JDBC connection 
'jdbc:calcitesolr:'.
Error while executing SQL "select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc": 
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
https://127.0.0.1:33933/an/k/collection1_shard2_replica_n45/:id must have 
DocValues to use this feature.
at 
__randomizedtesting.SeedInfo.seed([956D0D2614C9:3033CDC9609D0770]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:222)
at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2522)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:124)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:82)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 225 - Unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/225/

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at __randomizedtesting.SeedInfo.seed([63C7B07DDB40D00E:C86FF428FA323]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/89)={   
"replicationFactor":"2",   "pullReplicas":"0",   

[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-23 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488258#comment-16488258
 ] 

Lucene/Solr QA commented on LUCENE-8328:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} LUCENE-8328 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924661/LUCENE-8328.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/14/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch, LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7335 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7335/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=3558

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=3558
at 
__randomizedtesting.SeedInfo.seed([EE7FA09C14CFDEB5:D613D3B9801F7CF3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=14605000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=14605000

[jira] [Commented] (SOLR-9685) tag a query in JSON syntax

2018-05-23 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488253#comment-16488253
 ] 

Lucene/Solr QA commented on SOLR-9685:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m  7s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestLargeCluster |
|   | solr.cloud.autoscaling.NodeAddedTriggerTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-9685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924629/SOLR-9685.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 48bd259 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/104/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/104/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/104/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-9685.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" } }
> {code}






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488172#comment-16488172
 ] 

Hoss Man commented on SOLR-12386:
-

BTW...
{quote}IMHO, the file handle leak in the mentioned commit could have been fixed 
by just using try-with-resources around Class(Loader)#getResourceAsStream() ...
{quote}
...that would have solved that particular leak, but it would not have fixed the 
error message returned in the event that the {{getResourceAsStream()}} _in that 
line of code_ returned 'null' down the road because some _other_ place in the 
code had a different resource leak ... we would still have the possibility of 
the caller throwing confusing {{invalid API spec: 
apispec/core.config.Commands.json}} exceptions because the method just returned 
'null' and it couldn't distinguish "resource doesn't exist" from "OS wouldn't 
let the JVM open that resource"
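The ambiguity being described can be sketched in a few lines (a hedged illustration; the class name is hypothetical, and the resource name is borrowed from the error message quoted above):

```java
import java.io.IOException;
import java.io.InputStream;

public class NullAmbiguityDemo {
    public static InputStream open(String name) throws IOException {
        // try-with-resources fixes the stream leak, but getResourceAsStream()
        // still signals both "resource absent" and "OS wouldn't let the JVM
        // open that resource" the same way: by returning null.
        InputStream in = NullAmbiguityDemo.class.getClassLoader()
                .getResourceAsStream(name);
        if (in == null) {
            // The caller cannot tell which of the two cases happened.
            throw new IOException("Can't find resource '" + name + "'");
        }
        return in;
    }
}
```

Under normal conditions the thrown message is accurate; under file-handle exhaustion it is misleading, which is exactly the confusion described here.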

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488164#comment-16488164
 ] 

Hoss Man commented on SOLR-12386:
-

bq. Unfortunately there is no Class#getResources(), it's only on classloader.

Why does that matter? Doesn't {{Class.getResourceAsStream()}} just call 
{{Class.getClassLoader().getResourceAsStream()}}? We could still replace it 
with a helper utility like you're describing by passing 
{{Class.getClassLoader()}}.

bq. But we should fix the underlying issue (the leaks first), then think about 
improving that situation.

My point before is that there may not actually *be* a leak -- it may very well 
be that all streams are getting closed properly, but that some tests are 
opening just enough resources that (depending on what other tests ran in the 
same JVM and what classes got loaded) they are hitting the ulimit for open 
files -- but instead of a clear error to that effect, we're getting "null" from 
{{getResourceAsStream()}}

i.e.: I agree with you that if there is a file handle leak we should fix it, 
but that is an independent possibility from the fact that we can/should "fix" 
the code we have which opens resources to better report/propagate when we hit 
'IOException: Too many open files' under the covers, so people aren't baffled 
and confused by "Can't find resource" for files that definitely exist.

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.






[jira] [Resolved] (SOLR-12265) Upgrade Jetty to 9.4.10

2018-05-23 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12265.
--
Resolution: Fixed

> Upgrade Jetty to 9.4.10
> ---
>
> Key: SOLR-12265
> URL: https://issues.apache.org/jira/browse/SOLR-12265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12265-shaded-jetty-start.patch, SOLR-12265.patch
>
>
> Solr 7.3 upgraded to Jetty 9.4.8.
> We're seeing this WARN very sporadically (maybe one in every 100k requests) 
> on the replica when indexing.
> {code:java}
> date time WARN [qtp768306356-580185] ? (:) - 
> java.nio.channels.ReadPendingException: null
> at org.eclipse.jetty.io.FillInterest.register(FillInterest.java:58) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractEndPoint.fillInterested(AbstractEndPoint.java:353)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection.fillInterested(AbstractConnection.java:134)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) 
> ~[jetty-server-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:289) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:149) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0-zing_17.11.0.0]
> date time WARN [qtp768306356-580185] ? (:) - Read pending for 
> org.eclipse.jetty.server.HttpConnection$BlockingReadCallback@2e98df28 
> prevented AC.ReadCB@424271f8{HttpConnection@424271f8[p=HttpParser{s=START,0 
> of 
> -1},g=HttpGenerator@424273ae{s=START}]=>HttpChannelOverHttp@4242713d{r=141,c=false,a=IDLE,uri=null}<-DecryptedEndPoint@4242708d{/host:52824<->/host:port,OPEN,fill=FI,flush=-,to=1/86400}->HttpConnection@424271f8[p=HttpParser{s=START,0
>  of -1},g=HttpGenerator@424273ae{s=START}]=>{code}
> When this happens the leader basically waits until it gets a 
> SocketTimeoutException and then puts the replica into recovery.
> My motivation for upgrading to Jetty 9.4.9 is that the EatWhatYouKill strategy 
> was introduced in Jetty 9.4.x. I don't believe we saw this error in Jetty 
> 9.3.x, and in Jetty 9.4.9 this class has undergone quite a few changes 
> [https://github.com/eclipse/jetty.project/commit/0cb4f5629dca082eec943b94ec8ef4ca0d5f1aa4#diff-ae450a12d4eca85a437bd5082f698f48].
>  






[jira] [Commented] (SOLR-8889) SolrCloud deleteById is broken when router.field is set

2018-05-23 Thread Jay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488144#comment-16488144
 ] 

Jay commented on SOLR-8889:
---

[~dsmiley], [~ichattopadhyaya]: is there a solution for this issue? I have a 
multi-sharded setup with implicit routing and also do a lot of deletes. Is 
there an alternative to using deleteById?

Tested in both Solr 5.3 and Solr 6.6.3.

Thanks

 

> SolrCloud deleteById is broken when router.field is set
> ---
>
> Key: SOLR-8889
> URL: https://issues.apache.org/jira/browse/SOLR-8889
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR_8889_investigation.patch
>
>
> If you set router.field on your collection to shard by something other than 
> the ID, then deleting documents by ID fails some of the time (how much 
> depends on how sharded the collection is).  I suspect that it'd work if the 
> IDs provided when deleting by ID were prefixed using the composite key syntax 
> -- "routekey!id" though I didn't check.  This is terrible.  Internally Solr 
> should broadcast to all the shards if there is no composite key prefix.
> Some affected code is UpdateRequest.getRoutes.
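A hedged sketch of the composite-key workaround the description suspects (explicitly unverified above; the route key and id values here are hypothetical):

```java
// Sketch only: the description suspects (but did not verify) that
// prefixing the id with the route key using composite-key syntax
// ("routekey!id") would let deleteById reach the right shard.
public class RoutedDeleteId {
    public static String prefixed(String routeKey, String id) {
        return routeKey + "!" + id;
    }

    public static void main(String[] args) {
        // e.g. client.deleteById(prefixed("store1", "doc42"));
        System.out.println(prefixed("store1", "doc42")); // prints store1!doc42
    }
}
```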






[jira] [Comment Edited] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488143#comment-16488143
 ] 

Uwe Schindler edited comment on SOLR-12386 at 5/23/18 10:46 PM:


bq. And now that I look at it again: if we used Enumeration<URL> 
getResources(String name) instead of URL getResource(String) wouldn't that 
ensure we would get an IOException instead of "null" in the case you're talking 
about where even the lookup of the name failed because of too many filehandles?

This could work if we create a static utility method taking Class or 
ClassLoader and a resource name, returning an InputStream. Unfortunately there 
is no Class#getResources(), it's only on classloader. We still have the problem 
outside of Solr's code that after running out of file handles, loading of 
resources fails - and sometimes fails with NPE (depending on the brokenness of 
code)!

IMHO, the file handle leak in the mentioned commit could have been fixed by 
just using try-with-resources around Class(Loader)#getResourceAsStream(). But 
we should fix the underlying issue (the leaks) first, then think about 
improving that situation.


was (Author: thetaphi):
bq. And now that I look at it again: if we used Enumeration<URL> 
getResources(String name) instead of URL getResource(String) wouldn't that 
ensure we would get an IOException instead of "null" in the case you're talking 
about where even the lookup of the name failed because of too many filehandles?

This could work if we create a static utility method taking Class or 
ClassLoader and a resource name, returning an InputStream. Unfortunately there 
is no Class#getResources(), it's only on classloader. We still have the problem 
outside of Solr's code that after running out of file handles, loading of 
resources fails - and sometimes fails with NPE!

IMHO, the file handle leak in the mentioned commit could have been fixed by 
just using try-with-resources around Class(Loader)#getResourceAsStream(). But 
we should fix the underlying issue (the leaks) first, then think about 
improving that situation.

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488143#comment-16488143
 ] 

Uwe Schindler commented on SOLR-12386:
--

bq. And now that I look at it again: if we used Enumeration<URL> 
getResources(String name) instead of URL getResource(String) wouldn't that 
ensure we would get an IOException instead of "null" in the case you're talking 
about where even the lookup of the name failed because of too many filehandles?

This could work if we create a static utility method taking Class or 
ClassLoader and a resource name, returning an InputStream. Unfortunately there 
is no Class#getResources(), it's only on classloader. We still have the problem 
outside of Solr's code that after running out of file handles, loading of 
resources fails - and sometimes fails with NPE!

IMHO, the file handle leak in the mentioned commit could have been fixed by 
just using try-with-resources around Class(Loader)#getResourceAsStream(). But 
we should fix the underlying issue (the leaks) first, then think about 
improving that situation.
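The static utility method being described might look like the following (a hedged sketch; {{ResourceUtil}}/{{openResource}} are hypothetical names, not Solr's actual API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;

public class ResourceUtil {
    // Hypothetical helper: callers pass SomeClass.class.getClassLoader()
    // (working around the missing Class#getResources()) plus a resource
    // name, and get back an InputStream.
    public static InputStream openResource(ClassLoader loader, String name)
            throws IOException {
        // getResources() declares IOException, so "Too many open files"
        // during the lookup surfaces as an exception, not a silent null.
        Enumeration<URL> urls = loader.getResources(name);
        if (!urls.hasMoreElements()) {
            throw new IOException("Can't find resource '" + name
                    + "' on the classpath");
        }
        return urls.nextElement().openStream();
    }
}
```

Because {{ClassLoader#getResources(String)}} declares {{IOException}}, file-handle exhaustion during the lookup produces a clear exception rather than the ambiguous null from {{getResourceAsStream()}}.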

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488133#comment-16488133
 ] 

Hoss Man commented on SOLR-12386:
-

bq. GetResource() may also return null ...

But at least then you can tell the difference between "getResource() returned 
null meaning we didn't locate the file resource name" and "openConnection() 
threw 'IOException: Too many open files'"

And now that I look at it again: if we used {{Enumeration 
getResources(String name)}} instead of {{URL getResource(String)}}, wouldn't 
that ensure we would get an IOException instead of "null" in the case you're 
talking about where even the lookup of the name failed because of too many 
filehandles?

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






[JENKINS] Lucene-Solr-Tests-master - Build # 2542 - Unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2542/

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
unexpected DELETENODE status: 
{responseHeader={status=0,QTime=17},status={state=notfound,msg=Did not find 
[search_rate_trigger3/218b7c52c2583aTdpf5box5uhk9dvhokzp6ywnm8/0] in any tasks 
queue}}

Stack Trace:
java.lang.AssertionError: unexpected DELETENODE status: 
{responseHeader={status=0,QTime=17},status={state=notfound,msg=Did not find 
[search_rate_trigger3/218b7c52c2583aTdpf5box5uhk9dvhokzp6ywnm8/0] in any tasks 
queue}}
at 
__randomizedtesting.SeedInfo.seed([E150F31FDF8F2D5E:C3C23D9DE845A223]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.lambda$testDeleteNode$6(SearchRateTriggerIntegrationTest.java:668)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:660)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

Re: Management Path?

2018-05-23 Thread Mark Miller
It's under-documented, under-tested, and SolrCloud doesn't like configurable
endpoints like this. It's also documented as non-operational. +1 on
removing.

- Mark

On Wed, May 23, 2018 at 4:56 PM Gus Heck  wrote:

> I was just trying to understand the role of ShardHandler, and tracing back
> things to understand when it's called (or not) with respect to updates. I
> found myself reading the init() method of HttpSolrCall (because a lot of
> logic around request types happens on init() ) and the very first thing in
> the method is an if block regarding some sort of alternate path
> substitution involving "managementPath"... This is apparently a valid
> config item in solr.xml, so trying to figure out if I cared about this or
> not I went to find out what it was... there was a confusing comment in
> code, and an example of it being set in the file solr-50--all.xml, but that
> appears to just test that setting it in the xml causes it to appear in the
> config java object and then I finally found the documentation for it in
> the ref guide:
>
> managementPath
>
> Currently non-operational.
>
> (https://lucene.apache.org/solr/guide/7_3/format-of-solr-xml.html)
>
> In other words I just spent 15-20 min tracking down something non-useful
> that apparently was originally added in SOLR-695 (which says it was
> committed to encourage review, but nobody other than the author commented
> on it thereafter).
>
> This looks a lot like dead, or at least questionable code... perhaps it
> should be removed so that folks don't have to spend time figuring out that
> it is non-operational any time they read the code?
>
> Or is it operational/useful but undocumented?
>
> -Gus
>
> --
> http://www.the111shift.com
>
-- 
- Mark
about.me/markrmiller


[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488104#comment-16488104
 ] 

Uwe Schindler commented on SOLR-12386:
--

getResource() may also return null if the lookup of a file name does not work. 
This happens easily when you run out of file handles and the JAR file to be 
searched was closed by another call before.

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488095#comment-16488095
 ] 

Hoss Man commented on SOLR-12386:
-

{quote}All other classloader methods are behaving the same way, so how to load 
resources then?
{quote}
Not true – getResourceAsStream explicitly swallows any IOExceptions (which 
might be thrown if there are too many open files) and returns "null" if they 
are encountered – we can do the same thing but actually catch & wrap/rethrow 
the IOExceptions by calling {{ClassLoader.getResource() + 
URL.openConnection()}} instead.

As I said: see SOLR-12021...
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/solrj/src/java/org/apache/solr/common/util/Utils.java;h=d35486e6c7688c4b32d8bd6840e590a36b4a5ab2;hp=4ab24d2be3e2aaf39b041c2a3676f456040b5e58;hb=9e0e301;hpb=df0f141907b0701d7b1f1fc297ae33ef901844a0
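The getResource() + openConnection() approach described above could be sketched as follows. This is an illustrative stand-alone example, not the actual SOLR-12021 patch; the class name {{SafeResource}} is hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

/** Sketch: let IOExceptions (e.g. "Too many open files") reach the caller
 *  instead of being swallowed into null as getResourceAsStream() does. */
public final class SafeResource {
  private SafeResource() {}

  public static InputStream open(ClassLoader loader, String name)
      throws IOException {
    URL url = loader.getResource(name);
    if (url == null) {
      // Distinguishable case: the name itself could not be located.
      throw new IOException("Resource not found: " + name);
    }
    // Unlike getResourceAsStream(), openConnection()/getInputStream() throw
    // IOException with the real root cause rather than returning null.
    return url.openConnection().getInputStream();
  }
}
```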




> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






Management Path?

2018-05-23 Thread Gus Heck
I was just trying to understand the role of ShardHandler, and tracing back
things to understand when it's called (or not) with respect to updates. I
found myself reading the init() method of HttpSolrCall (because a lot of
logic around request types happens on init() ) and the very first thing in
the method is an if block regarding some sort of alternate path
substitution involving "managementPath"... This is apparently a valid
config item in solr.xml, so trying to figure out if I cared about this or
not I went to find out what it was... there was a confusing comment in
code, and an example of it being set in the file solr-50--all.xml, but that
appears to just test that setting it in the xml causes it to appear in the
config java object and then I finally found the documentation for it in
the ref guide:

managementPath

Currently non-operational.

(https://lucene.apache.org/solr/guide/7_3/format-of-solr-xml.html)

In other words I just spent 15-20 min tracking down something non-useful
that apparently was originally added in SOLR-695 (which says it was
committed to encourage review, but nobody other than the author commented
on it thereafter).

This looks a lot like dead, or at least questionable code... perhaps it
should be removed so that folks don't have to spend time figuring out that
it is non-operational any time they read the code?

Or is it operational/useful but undocumented?

-Gus

-- 
http://www.the111shift.com


[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488088#comment-16488088
 ] 

Uwe Schindler commented on SOLR-12386:
--

You can't make getResourceAsStream a forbidden API, because this method is 
basically what all Java resource handling goes through. All other classloader 
methods behave the same way, so how would we load resources then?

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488087#comment-16488087
 ] 

Jan Høydahl commented on SOLR-10299:


I propose that building the HTML version of the ref guide builds a static index 
using Go: it will first try to find a local Go install; if not found, it will 
attempt “docker run go...”; and if that fails, it will skip the search index. 
This means that the RM for the ref guide in the future needs either Go or 
Docker installed on the build machine.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22088 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22088/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Newly added node was not present in event message

Stack Trace:
java.lang.AssertionError: Newly added node was not present in event message
at 
__randomizedtesting.SeedInfo.seed([46DAE7F641F56910:88744365B9CC1106]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:306)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14478 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488082#comment-16488082
 ] 

Hoss Man commented on SOLR-12386:
-

This overall symptom sounds really familar – see SOLR-12021.

I wonder if the root cause here is similar to what I found in that jira?
{quote}[resource files] are being loaded with 
{{...getClassLoader().getResourceAsStream(resourceName)}} – but nothing is ever 
closing the stream, so it can eventually (depending on what test classes run in 
each JVM and how many files they try to open like this) cause the JVM to hit 
the ulimit for open file handles – but that specific cause of the failure is 
never reported, because {{ClassLoader.getResourceAsStream(...)}} is explicitly 
designed to swallow any IOExceptions encountered and just returns "null"...
{quote}
...although there were definitely some leaked InputStreams in that jira, there 
wouldn't necessarily even have to be a resource leak to see similar problems: 
if individual tests are opening a lot of cores concurrently, they could be 
hitting the ulimit on jenkins ("randomly" depending on what other tests were 
run in the same JVM causing a variable number of open file handles to various 
class files held open by the current system classloader), but instead of 
throwing a clean error to that effect getResourceAsStream just returns null and 
causes the SolrResourceLoader to assume it doesn't exist ... maybe?

(we should probably consider making {{ClassLoader.getResourceAsStream}} a 
forbidden API to prevent this risk even if it's *not* the cause of the current 
failures.)
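The leak pattern described above, and its try-with-resources fix, can be illustrated in isolation. This is a toy example; the class name and the resource name passed in are hypothetical, not taken from the Solr codebase:

```java
import java.io.IOException;
import java.io.InputStream;

/** Minimal illustration of the stream-leak pattern and its fix. */
public class LeakExample {
  // BAD: the stream is never closed, so each call pins a file handle
  // until GC happens to finalize it (if ever).
  static long leaky(ClassLoader loader, String name) throws IOException {
    InputStream in = loader.getResourceAsStream(name);
    return in == null ? -1 : in.available();
  }

  // GOOD: try-with-resources closes the handle even on exceptions.
  // (A null resource is permitted; close() is simply skipped.)
  static long fixed(ClassLoader loader, String name) throws IOException {
    try (InputStream in = loader.getResourceAsStream(name)) {
      return in == null ? -1 : in.available();
    }
  }
}
```

Note that even the "fixed" variant still cannot distinguish a missing resource from an IOException during lookup, which is the swallowing problem discussed above.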

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is 
> in the default ConfigSet yet mysteriously can't be found.  This happens 
> when a collection is being created that ultimately fails for this reason.






[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488026#comment-16488026
 ] 

ASF subversion and git services commented on SOLR-12378:


Commit 48bd259516b8d78c991239fe7cf3340c90f582e5 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=48bd259 ]

SOLR-12378: Support missing versionField on indexed docs in 
DocBasedVersionConstraintsURP.


> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index because 
> we get this line throwing shade:
> {code:java}
> throw new SolrException(SERVER_ERROR,
> "Doc exists in index, but has null versionField: "
> + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to have an option to allow the 
> existing docs to be missing this field and to simply treat those docs as 
> older than anything coming in with that field set.
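The proposed semantics (docs missing the versionField are treated as older than anything carrying one) could be sketched as follows. The class name, method name, and the {{supportMissingVersionOnOldDocs}} flag here are illustrative only, not Solr's actual API:

```java
/** Hedged sketch of the version comparison proposed in this issue. */
public final class VersionRule {
  private VersionRule() {}

  public static boolean shouldOverwrite(Long indexedVersion, long newVersion,
                                        boolean supportMissingVersionOnOldDocs) {
    if (indexedVersion == null) {
      if (supportMissingVersionOnOldDocs) {
        // Proposed behavior: an unversioned indexed doc loses to any
        // incoming doc that carries a version.
        return true;
      }
      // Current behavior: the exception quoted in the description.
      throw new IllegalStateException(
          "Doc exists in index, but has null versionField");
    }
    return newVersion > indexedVersion;
  }
}
```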






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1887 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1887/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.HttpTriggerListenerTest

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:65489 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:65489 within 3 ms
at __randomizedtesting.SeedInfo.seed([4A5F3296DB28BCDE]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:183)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:120)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:102)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:269)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:263)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:198)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.autoscaling.HttpTriggerListenerTest.setupCluster(HttpTriggerListenerTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:65489 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:232)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:175)
... 32 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.HttpTriggerListenerTest

Error Message:
23 threads leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.HttpTriggerListenerTest: 1) 
Thread[id=6396, name=Scheduler-476496324, state=TIMED_WAITING, 
group=TGRP-HttpTriggerListenerTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 1965 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1965/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([FFC5019095B77A82:9C0E37120C7809AF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14628 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-12393) ExpandComponent only calculates the score of expanded docs when sorted by score

2018-05-23 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487949#comment-16487949
 ] 

David Smiley commented on SOLR-12393:
-

This test exposes the problem.  The fix is probably straightforward in 
ExpandComponent when the collector is created: it ought to consider 
ReturnFields.wantsScore().
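The suggested check can be sketched roughly as follows. This is only an illustration of the wantsScore() decision, not the actual patch: the ScoreMode names mirror Lucene's, but ReturnFields and scoreModeFor() here are simplified stand-ins for the real Solr classes.

```java
// Sketch of the suggested fix: pick the score mode for the expand collector
// from whether the request actually asked for scores (e.g. "fl=*,score").
// ReturnFields is a simplified stand-in for Solr's class of the same name.
public class ExpandScoreModeSketch {
  enum ScoreMode { COMPLETE, COMPLETE_NO_SCORES }

  static class ReturnFields {
    private final boolean wantsScore;
    ReturnFields(boolean wantsScore) { this.wantsScore = wantsScore; }
    boolean wantsScore() { return wantsScore; }
  }

  // Only compute scores when the "fl" list requested them.
  static ScoreMode scoreModeFor(ReturnFields fields) {
    return fields.wantsScore() ? ScoreMode.COMPLETE : ScoreMode.COMPLETE_NO_SCORES;
  }

  public static void main(String[] args) {
    System.out.println(scoreModeFor(new ReturnFields(true)));   // COMPLETE
    System.out.println(scoreModeFor(new ReturnFields(false)));  // COMPLETE_NO_SCORES
  }
}
```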

> ExpandComponent only calculates the score of expanded docs when sorted by 
> score
> ---
>
> Key: SOLR-12393
> URL: https://issues.apache.org/jira/browse/SOLR-12393
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12393.patch
>
>
> If you use the ExpandComponent to show expanded docs and if you want the 
> score back (specified in "fl"), it will be NaN if the expanded docs are 
> sorted by anything other than the default score descending.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12393) ExpandComponent only calculates the score of expanded docs when sorted by score

2018-05-23 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12393:

Attachment: SOLR-12393.patch

> ExpandComponent only calculates the score of expanded docs when sorted by 
> score
> ---
>
> Key: SOLR-12393
> URL: https://issues.apache.org/jira/browse/SOLR-12393
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12393.patch
>
>
> If you use the ExpandComponent to show expanded docs and if you want the 
> score back (specified in "fl"), it will be NaN if the expanded docs are 
> sorted by anything other than the default score descending.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487946#comment-16487946
 ] 

Mark Miller commented on SOLR-10299:


Just offering my opinion. Things like that tend to have a lot of energy in 
launching but lose attention over the months and years and suck away effort 
that may be better spent on things more core to the project. Others can weigh 
in and we can see what the consensus is. If there is a large enough group that 
thinks they will keep such a service going, I wouldn't try and stop it.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Commented] (SOLR-12375) ScoreMode not always set correctly in Solr queries

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487943#comment-16487943
 ] 

ASF subversion and git services commented on SOLR-12375:


Commit 11fb992abb3d209ab34a50956f2affe9626380b0 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11fb992 ]

SOLR-12375: Optimize Lucene needsScore / ScoreMode use:
* A non-cached filter query could be told incorrectly that scores were needed.
* The /export (ExportQParserPlugin) would declare incorrectly that scores are 
needed.
* Expanded docs (expand component) could be told incorrectly that scores are 
needed.

note: non-trivial changes back-ported; ScoreMode is in master; 7x has 
needsScore.

(cherry picked from commit 53a3de3)


> ScoreMode not always set correctly in Solr queries
> --
>
> Key: SOLR-12375
> URL: https://issues.apache.org/jira/browse/SOLR-12375
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 5.1, 7.3.1
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12375.patch
>
>
> A query can be informed that scores are not needed based on it's context/use, 
> and some queries are able to operate more efficiently if it knows this 
> up-front.  This is about the ScoreMode enum.
> I reviewed the use of {{ScoreMode.COMPLETE}} in Solr and I think we should 
> make the following changes:
> Solr filter queries (fq) are non-scoring.  
> {{SolrIndexSearcher.getProcessedFilter}} will pass ScoreMode.COMPLETE when it 
> ought to be COMPLETE_NO_SCORES to createWeight.  This perf bug is only 
> applicable when the filter query is not cached (either cache=false 
> local-param or no filter cache).  This error was made in LUCENE-6220 (Solr 
> 5.1); at that time it was a boolean.
> The {{/export}} handler (more specifically ExportQParserPlugin) is also 
> affected; it's COMPLETE when it should always be COMPLETE_NO_SCORES.  Also 
> appears to be in error since Solr 5.1.
> SolrIndexSearcher.getDocListAndSetNC ought to use TOP_SCORES to track the 
> top-score to be more correct but it's a distinction without a difference 
> since MultiCollector.wrap with the DocSetCollector will combine it with 
> COMPLETE_NO_SCORES to conclude the result is COMPLETE.
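The last point above — differing collector needs combining into COMPLETE — can be modeled concisely. This is a hedged, simplified model of the behavior the issue describes for MultiCollector.wrap, not Lucene's actual implementation; the enum values mirror Lucene's ScoreMode names.

```java
// Simplified model of score-mode combination: a wrapped collector must
// satisfy the most demanding sub-collector, so TOP_SCORES mixed with
// COMPLETE_NO_SCORES (the getDocListAndSetNC case) degrades to COMPLETE.
public class ScoreModeCombineSketch {
  enum Mode { COMPLETE, COMPLETE_NO_SCORES, TOP_SCORES }

  // Identical needs pass through; any mismatch falls back to full scoring.
  static Mode combine(Mode a, Mode b) {
    if (a == b) return a;
    return Mode.COMPLETE;
  }

  public static void main(String[] args) {
    // Top-docs collector plus DocSet collector, as described in the issue.
    System.out.println(combine(Mode.TOP_SCORES, Mode.COMPLETE_NO_SCORES)); // COMPLETE
  }
}
```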






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487935#comment-16487935
 ] 

Shawn Heisey commented on SOLR-10299:
-

[~markrmil...@gmail.com], I have been discussing this very issue with Infra on 
their hipchat channel, then I saw your comment.  One idea they had was a VM (or 
maybe two for fault tolerance, I intended to ask about 3 for SolrCloud).  That 
solution would serve documentation pages and access Solr locally.  The project 
would be responsible for the software on the VM.

If there's strong opposition, I can drop the discussion with Infra.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Commented] (SOLR-12375) ScoreMode not always set correctly in Solr queries

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487924#comment-16487924
 ] 

ASF subversion and git services commented on SOLR-12375:


Commit 53a3de3b98a5a06146a33251c176b7e4475270e4 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53a3de3 ]

SOLR-12375: Optimize Lucene ScoreMode use:
* A non-cached filter query could be told incorrectly that scores were needed.
* The /export (ExportQParserPlugin) would declare incorrectly that scores are 
needed.
* Expanded docs (expand component) could be told incorrectly that scores are 
needed.


> ScoreMode not always set correctly in Solr queries
> --
>
> Key: SOLR-12375
> URL: https://issues.apache.org/jira/browse/SOLR-12375
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 5.1, 7.3.1
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12375.patch
>
>
> A query can be informed that scores are not needed based on it's context/use, 
> and some queries are able to operate more efficiently if it knows this 
> up-front.  This is about the ScoreMode enum.
> I reviewed the use of {{ScoreMode.COMPLETE}} in Solr and I think we should 
> make the following changes:
> Solr filter queries (fq) are non-scoring.  
> {{SolrIndexSearcher.getProcessedFilter}} will pass ScoreMode.COMPLETE when it 
> ought to be COMPLETE_NO_SCORES to createWeight.  This perf bug is only 
> applicable when the filter query is not cached (either cache=false 
> local-param or no filter cache).  This error was made in LUCENE-6220 (Solr 
> 5.1); at that time it was a boolean.
> The {{/export}} handler (more specifically ExportQParserPlugin) is also 
> affected; it's COMPLETE when it should always be COMPLETE_NO_SCORES.  Also 
> appears to be in error since Solr 5.1.
> SolrIndexSearcher.getDocListAndSetNC ought to use TOP_SCORES to track the 
> top-score to be more correct but it's a distinction without a difference 
> since MultiCollector.wrap with the DocSetCollector will combine it with 
> COMPLETE_NO_SCORES to conclude the result is COMPLETE.






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 663 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/663/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_DFE09E86E73CDF0B-001/init-core-data-001/tlog/tlog.001,
 tlog size: 4520

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_DFE09E86E73CDF0B-001/init-core-data-001/tlog/tlog.001,
 tlog size: 4520
at 
__randomizedtesting.SeedInfo.seed([DFE09E86E73CDF0B:CFAE7B799C92E6FA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:200)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12375) ScoreMode not always set correctly in Solr queries

2018-05-23 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487908#comment-16487908
 ] 

David Smiley commented on SOLR-12375:
-

From a W.I.P. test for expanded docs: they don't return the score today 
regardless of what I do in this patch.  I'll attach that to SOLR-12393.

So I'll commit the current patch shortly.

> ScoreMode not always set correctly in Solr queries
> --
>
> Key: SOLR-12375
> URL: https://issues.apache.org/jira/browse/SOLR-12375
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 5.1, 7.3.1
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12375.patch
>
>
> A query can be informed that scores are not needed based on it's context/use, 
> and some queries are able to operate more efficiently if it knows this 
> up-front.  This is about the ScoreMode enum.
> I reviewed the use of {{ScoreMode.COMPLETE}} in Solr and I think we should 
> make the following changes:
> Solr filter queries (fq) are non-scoring.  
> {{SolrIndexSearcher.getProcessedFilter}} will pass ScoreMode.COMPLETE when it 
> ought to be COMPLETE_NO_SCORES to createWeight.  This perf bug is only 
> applicable when the filter query is not cached (either cache=false 
> local-param or no filter cache).  This error was made in LUCENE-6220 (Solr 
> 5.1); at that time it was a boolean.
> The {{/export}} handler (more specifically ExportQParserPlugin) is also 
> affected; it's COMPLETE when it should always be COMPLETE_NO_SCORES.  Also 
> appears to be in error since Solr 5.1.
> SolrIndexSearcher.getDocListAndSetNC ought to use TOP_SCORES to track the 
> top-score to be more correct but it's a distinction without a difference 
> since MultiCollector.wrap with the DocSetCollector will combine it with 
> COMPLETE_NO_SCORES to conclude the result is COMPLETE.






[jira] [Created] (SOLR-12393) ExpandComponent only calculates the score of expanded docs when sorted by score

2018-05-23 Thread David Smiley (JIRA)
David Smiley created SOLR-12393:
---

 Summary: ExpandComponent only calculates the score of expanded 
docs when sorted by score
 Key: SOLR-12393
 URL: https://issues.apache.org/jira/browse/SOLR-12393
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SearchComponents - other
Reporter: David Smiley


If you use the ExpandComponent to show expanded docs and if you want the score 
back (specified in "fl"), it will be NaN if the expanded docs are sorted by 
anything other than the default score descending.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487890#comment-16487890
 ] 

Mark Miller commented on SOLR-10299:


If you could use Solr in an embedded way that requires minimal to no 
maintenance, okay great. But Solr does not excel at that.

If someone figures out how to use Solr in a convenient way, great. But it's a 
slippery slope. What if search goes down? Don't we work on SolrCloud? Shouldn't 
it be fault tolerant and always running? Shouldn't the search experience itself 
be A++ in all cases?

I'm open to any solution that ends up making sense. But from my viewpoint, we 
just want some good enough keyword search that we don't have to maintain, and 
then for something better we can clearly link out to managed services like 
LucidFind that will showcase Solr much better than we will over the long term.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Commented] (SOLR-12374) Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)

2018-05-23 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487870#comment-16487870
 ] 

Lucene/Solr QA commented on SOLR-12374:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} clustering in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 49s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.cloud.autoscaling.IndexSizeTriggerTest |
|   | solr.cloud.autoscaling.SearchRateTriggerTest |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12374 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924607/SOLR-12374.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / d32ce81 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/103/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/103/testReport/ |
| modules | C: solr/contrib/clustering solr/core U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/103/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)
> -
>
> Key: SOLR-12374
> URL: https://issues.apache.org/jira/browse/SOLR-12374
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12374.patch
>
>
> I propose adding the following to SolrCore:
> {code:java}
>   /**
>* Executes the lambda with the {@link SolrIndexSearcher}.  This is more 
> convenience than using
>* {@link #getSearcher()} since there is no ref-counting business to worry 
> about.
>* Example:
>* <pre>
>*   IndexReader reader = 
> h.getCore().withSearcher(SolrIndexSearcher::getIndexReader);
>* </pre>
>*/
>   @SuppressWarnings("unchecked")
>   public <R> R withSearcher(Function<SolrIndexSearcher, R> lambda) {
> final RefCounted<SolrIndexSearcher> refCounted = getSearcher();
> try {
>   return lambda.apply(refCounted.get());
> } finally {
>   refCounted.decref();
> }
>   }
> {code}
> This is a nice tight convenience method, avoiding the clumsy RefCounted API 
> which is easy to accidentally incorrectly use – see 
> https://issues.apache.org/jira/browse/SOLR-11616?focusedCommentId=16477719=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16477719
> I guess my only (small) concern is if hypothetically you might make the 
> lambda short because it's easy to do that (see the one-liner example 

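A toy model of the ref-counting discipline the proposed withSearcher() encapsulates. RefCounted here is a tiny mock of Solr's class (the real one comes from SolrCore.getSearcher()) and a String stands in for SolrIndexSearcher; the point is only that the lambda form releases the reference even when the lambda throws.

```java
// Minimal illustration of the acquire/apply/always-release pattern that the
// proposed SolrCore.withSearcher() hides from callers. RefCounted is a mock
// of Solr's class; a String stands in for the SolrIndexSearcher resource.
import java.util.function.Function;

public class WithSearcherSketch {
  static class RefCounted<T> {
    private final T resource;
    private int refs = 1;
    RefCounted(T resource) { this.resource = resource; }
    T get() { return resource; }
    void decref() { refs--; }
    int refCount() { return refs; }
  }

  static RefCounted<String> searcher = new RefCounted<>("searcher");

  // Mirrors the proposed convenience method: acquire, apply, always release.
  static <R> R withSearcher(Function<String, R> lambda) {
    try {
      return lambda.apply(searcher.get());
    } finally {
      searcher.decref();
    }
  }

  public static void main(String[] args) {
    int len = withSearcher(String::length);
    System.out.println(len);                 // 8
    System.out.println(searcher.refCount()); // 0 -- released on the happy path too
  }
}
```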
[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487865#comment-16487865
 ] 

Alexandre Rafalovitch commented on SOLR-10299:
--

[~markrmil...@gmail.com] - could you explain a bit more about "not using" Solr? 
I understand "not hosting" (options 1 and 2), but you seem to also want to 
exclude options 3 and 4. 

As a straw man argument: since we are already building the ref guide as part of 
the process, we could theoretically index the just-generated ref guide into 
just-compiled Solr and put the generated index into example folder to ship with 
a custom Solr config (similar to /browse example). For size purposes, it could 
refer to the stable URL locations to display actual pages and not store any 
fields. That would be a variant of option 3. I am reading you as -0 on that, 
but not clear on why.

Just to re-clarify, I feel this issue is both technical (current search sucks, 
etc) and also community/politics optics similar to "the ugly homepage" issue we 
had until a new one got sponsored. I think, ideally, we would discuss both 
aspects and find a solution advancing both halves of the issue.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487842#comment-16487842
 ] 

Mark Miller commented on SOLR-10299:


I support any option that does not involve using or hosting Solr. People may 
think that is what we do and so it’s silly we don’t do it for our ref guide, 
but hosting and maintaining a search service and experience is not what we do, 
it is very much not suited for what we do.

I really liked the sound of Jan's solution, until the dependency issue came up.

If sponsors want to host something that matches what they do, that is great, 
but we should not rely on it. 

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Updated] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-05-23 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12343:

Attachment: SOLR-12343.patch

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but *not* returned at all by shard2, because these terms both have very 
> high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cut" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort where additional data provided by shards during refinement 
> can cause a bucket to "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...






[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-05-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487839#comment-16487839
 ] 

Hoss Man commented on SOLR-12343:
-

{quote}... I think it should just be considered a bug.
{quote}
That's pretty much my feeling, but I wasn't sure.
{quote}Truncating the list of buckets to N before the refinement phase would 
fix the bug, but it would also throw away complete buckets that could make it 
into the top N after refinement.
{quote}
oh right ... yeah, i was forgetting about buckets that got data from all shards 
in phase #1.
{quote}Exactly which buckets we chose to refine (and exactly how many) can 
remain an implementation detail. ...
{quote}
right ... it can be heuristically determined, and very conservative in cases 
where we know it doesn't matter – but i still think there should be an explicit 
option...

I worked up a patch similar to the straw man i outlined above – except that i 
didn't add the {{refine:required}} variant since we're in agreement that this 
is a bug.

In the new patch:
 * buckets now keep track of how many shards contributed to them
 ** I did this with a quick and dirty BitSet instead of an {{int 
numShardsContributing}} counter since we have to handle the possibility that 
{{mergeBuckets()}} will get called more than once for a single shard when we 
have partial refinement of sub-facets
 ** there's a nocommit in here about the possibility of re-using the 
{{Context.sawShard}} BitSet instead – but i couldn't wrap my head around an 
efficient way to do it so i punted
 * during the final "pruning" in {{FacetFieldMerger.getMergedResult()}} buckets 
are excluded if a bucket doesn't have contributions from as many shards as the 
FacetField
 ** again, i needed a new BitSet at the FacetField level to count the shards 
– because {{Context.numShards}} may include shards that never return *any* 
results for the facet (ie: empty shard), so they never merge any data at all
 * there is a new {{overrefine:N}} option which works similarly to overrequest – 
but instead of determining how many "extra" terms to request in phase#1, it 
determines how many "extra" buckets should be in {{numBucketsToCheck}} for 
refinement in phase #2 (but if some buckets are already fully populated in 
phase #2, then the actual number "refined" in phase#2 can be lower than 
limit+overrefine)
 ** the default heuristic currently pays attention to the sort – since (IIUC) 
{{count desc}} and {{index asc|desc}} should never need any "over refinement" 
unless {{mincount > 1}}
 ** if we have a non-trivial sort, and the user specified an explicit 
{{overrequest:N}} then the default heuristic for {{overrefine}} uses the same 
value {{N}}
 *** because i'm assuming if people have explicitly requested {{sort:SPECIAL, 
refine:true, overrequest:N}} then they care about the accuracy of the terms 
to some degree N, and the bigger N is the more we should care about 
over-refinement as well.
 ** if neither {{overrequest}} or {{overrefine}} are explicitly set, then we 
use the same {{limit * 1.1 + 4}} type heuristic as {{overrequest}}
 ** there's another nocommit here though: if we're using a heuristic, should we 
be scaling the derived {{numBucketsToCheck}} based on {{mincount}} ? ... if 
{{mincount=M > 1}} should we be doing something like {{numBucketsToCheck *= M}} 
??
 *** although, thinking about it now – this kind of mincount based factor would 
probably make more sense in the {{overrequest}} heuristic? maybe for 
{{overrefine}} we should look at how many buckets were already fully populated 
in phase#1 _AND_ meet the mincount, and use the difference between that 
number and the limit to decide a scaling factor?
 *** either way: can probably TODO this for a future enhancement.
 * Testing wise...
 ** These changes fix the problems in previous test patch
 ** I've also added some more tests, but there's nocommit's to add a lot more 
including verification of nested facets
 ** I didn't want to go too deep down the testing rabbit hole until i was sure 
we wanted to go this route.

what do you think?
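The default-heuristic bullets above can be sketched in a few lines. The method name, parameters, and structure below are mine for illustration — not the patch's actual code, which lives in the JSON Facet merge logic:

```java
public class OverrefineHeuristic {

    /**
     * How many "extra" buckets to consider for phase#2 refinement.
     * sortIsTrivial means count desc / index asc|desc, which should never
     * need over-refinement unless mincount > 1.
     * explicitOverrequest < 0 means the user did not specify overrequest.
     */
    static long overrefine(int limit, boolean sortIsTrivial, int mincount,
                           int explicitOverrequest) {
        if (sortIsTrivial && mincount <= 1) {
            return 0; // refinement can't change the order under these sorts
        }
        if (explicitOverrequest >= 0) {
            // the user signalled how much accuracy they care about; mirror it
            return explicitOverrequest;
        }
        // fall back to the same shape as the overrequest default
        return (long) (limit * 1.1 + 4);
    }

    public static void main(String[] args) {
        System.out.println(overrefine(10, false, 1, -1)); // default: 15
        System.out.println(overrefine(10, true, 1, -1));  // trivial sort: 0
    }
}
```

For a non-trivial sort with an explicit {{overrequest:N}}, the sketch returns N, mirroring the "they care to some degree N" reasoning above.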

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*

[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487795#comment-16487795
 ] 

Alexandre Rafalovitch commented on SOLR-10299:
--

Forgot to mention that any option that is not an *official documentation* link 
means discoverability drops like a rock. Though, in all truth, Google's 
discoverability of the Reference Guide (and not the Wiki) is also a challenge 
right now, but two minuses do not make a plus.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487784#comment-16487784
 ] 

Alexandre Rafalovitch commented on SOLR-10299:
--

I still think it is bad optics to _not_ use Solr to search the Reference Guide. 
The challenge is who/where to host it. There seem to be five options:
 # A sponsor (implies some commitment) hosts it with the implementation being a 
mini Open-Source project
 # A third-party individual hosts it, like Mike does with Jirasearch, bearing 
bandwidth and security costs
 # We bake self-hosted search into Solr as an actual example.
 # We make it a self-hosted example for a local Solr, but then people have 
to jump through hoops
 # We do some sort of static-hosting+Javascript (like Jan's effort), but then 
there is an issue of tool-chain and it does not showcase Solr (bad optics)

What did I miss?

 

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12391:
-
Priority: Major  (was: Blocker)

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12391:
-
Fix Version/s: (was: 7.4)

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Commented] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487743#comment-16487743
 ] 

Cassandra Targett commented on SOLR-12391:
--

Andrzej reminded me offline that this problem only exists on master, not in 
branch_7x (I misread that statement), so removing the blocker & fixVersion.

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Blocker
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Commented] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487735#comment-16487735
 ] 

Cassandra Targett commented on SOLR-12391:
--

IMO, yes, it should be a blocker. Even if it's not easy to fix, we shouldn't release with it.

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4
>
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12391:
-
Fix Version/s: 7.4

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.4
>
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12391:
-
Priority: Blocker  (was: Major)

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.4
>
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Commented] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487734#comment-16487734
 ] 

Erick Erickson commented on SOLR-12391:
---

Should this be a blocker? If it's easy to fix, I'd sure hate to release with it...

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487726#comment-16487726
 ] 

Erick Erickson commented on SOLR-12390:
---

One thing to do is go to the "other formats" drop-down at the top and then 
"archived PDFs" and download the entire PDF file for the reference guide and 
search _that_.



> Website search doesn't work for simple searches
> ---
>
> Key: SOLR-12390
> URL: https://issues.apache.org/jira/browse/SOLR-12390
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.6, 7.1
>Reporter: Jean Silva
>Priority: Minor
>
> Simple searches aren't working well on the website.
> I've tested it on the 6_6 and 7_1 docs versions.
> Because the purpose of Solr is to empower better search quality, I see this 
> ticket as kinda important.
> Here are some examples for which I got no results:
> *ngram*
> *analysers* (analy*z*ers work)
> *spellcheck*
> and probably many more.
>  
> Now, while creating this ticket, I paid more attention and saw the 
> placeholder "Page title lookup", but even so I think it is not easily 
> noticed AND not good for us developers trying to easily find the 
> documentation we want.
> If I could help with something please let me know.
> Thank you






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 62 - Still Unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/62/

4 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete

Error Message:
Error from server at 
https://127.0.0.1:48926/solr/testcollection_shard1_replica_n3: Expected mime 
type application/octet-stream but got text/html. Error 404 
Can not find: /solr/testcollection_shard1_replica_n3/update  
HTTP ERROR 404 Problem accessing 
/solr/testcollection_shard1_replica_n3/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n3/update (Powered by Jetty:// 9.4.10.v20180503)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:48926/solr/testcollection_shard1_replica_n3: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/testcollection_shard1_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason:
Can not find: 
/solr/testcollection_shard1_replica_n3/update (Powered by Jetty:// 9.4.10.v20180503)




at 
__randomizedtesting.SeedInfo.seed([951DFCBA5992E48A:36E7521FDE7A0E2F]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete(TestCollectionsAPIViaSolrCloudCluster.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: BadApple candidates

2018-05-23 Thread Erick Erickson
OK, not BadAppl-ing those tests.

I _think_ that my change to use Hoss' test results as a baseline
will automagically just use 7x and master. Before I was copy/pasting
from messages to the dev list and if I didn't pay close enough
attention to the header I'd pick up other builds.

Erick

On Tue, May 22, 2018 at 5:14 PM, David Smiley  wrote:
> Please don't bad-apple:
> * CreateRoutedAliasTest.  The failure I observed, thetaphi build 226, was on
> branch_7_3 which does not have SOLR-12308 (which is on master and branch7x)
> that I think solves that failure.
>
> Could you change your failure reporting to only consider master & branch7x?
>
> I just now applied @AwaitsFix to ConcurrentCreateRoutedAliasTest (similar
> name but not same) after filing SOLR-12386 for it -- the infamous (to us)
> "Can't find resource" relating to a configset file that ought to be there.
>
> AFAICT there is no other "alias" related fails pending.
>
> On Mon, May 21, 2018 at 11:01 AM Erick Erickson 
> wrote:
>>
>> I'm going to change how I collect the badapple candidates. After
>> getting a little
>> overwhelmed by the number of failure e-mails (even ignoring the ones with
>> BadApple enabled), "It come to me in a vision! In a flash!" (points if
>> you
>> know where that comes from, hint: Old music involving a pickle).
>>
>> Since I collect failures for a week and then filter them by what's
>> also in Hoss's
>> results from two  weeks ago, that's really equivalent to creating the
>> candidate
>> list from the intersection of the most recent week of Hoss's results and
>> the
>> results from _three_ weeks ago. Much faster too. Thanks Hoss!
>>
>> So that's what I'll do going forward.
>>
>> Meanwhile, here's the list for this Thursday.
>>
>> BadApple candidates: I'll BadApple these on Thursday unless there are
>> objections
>>   org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
>>org.apache.solr.TestDistributedSearch.test
>>org.apache.solr.cloud.AddReplicaTest.test
>>org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
>>
>> org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
>>org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
>>org.apache.solr.cloud.CreateRoutedAliasTest.testV1
>>org.apache.solr.cloud.CreateRoutedAliasTest.testV2
>>
>> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
>>org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
>>org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
>>
>> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
>>org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>>org.apache.solr.cloud.RestartWhileUpdatingTest.test
>>
>> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
>>org.apache.solr.cloud.TestPullReplica.testCreateDelete
>>org.apache.solr.cloud.TestPullReplica.testKillLeader
>>org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>>org.apache.solr.cloud.UnloadDistributedZkTest.test
>>
>> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
>>
>> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
>>org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
>>
>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
>>
>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>>org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>>org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>>
>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
>>
>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
>>org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
>>org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
>>org.apache.solr.cloud.hdfs.StressHdfsTest.test
>>org.apache.solr.handler.TestSQLHandler.doTest
>>org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
>>org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>>org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
>>org.apache.solr.update.TestInPlaceUpdatesDistrib.test
>>
>>
>> Number of AwaitsFix: 21 Number of BadApples: 99
>>
>> *AwaitsFix Annotations:
>>
>>
>> Lucene AwaitsFix
>> GeoPolygonTest.java
>>testLUCENE8276_case3()
>>
>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276")
>>
>> GeoPolygonTest.java
>>testLUCENE8280()
>>
>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280")
>>
>> GeoPolygonTest.java
>>testLUCENE8281()
>>
>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")
>>
>> RandomGeoPolygonTest.java
>>

[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-23 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487687#comment-16487687
 ] 

Lucene/Solr QA commented on SOLR-12358:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} SOLR-12358 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924691/SOLR-12358.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/102/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> 
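The "Comparison method violates its general contract!" failure in the trace above is java.util.TimSort detecting a comparator that does not define a consistent total order. A minimal, hedged illustration of one classic way to break the contract (integer overflow in a subtraction-based comparator; this class is illustrative only and is not the actual Policy.java code):

```java
import java.util.Comparator;

// Illustrative only (not Solr code): a subtraction-based comparator
// overflows for extreme values, so it does not define a consistent total
// order. java.util.TimSort detects such inconsistencies while merging runs
// and throws "Comparison method violates its general contract!".
public class ContractViolation {
    public static final Comparator<Integer> BROKEN = (a, b) -> a - b;

    public static void main(String[] args) {
        // -2 < Integer.MAX_VALUE, so a correct comparator must return a
        // negative value here; the broken one overflows to a positive one.
        System.out.println(BROKEN.compare(-2, Integer.MAX_VALUE)); // 2147483647
    }
}
```

Another way this class of bug arises in practice is comparator inputs being mutated while the sort is running; whatever the exact cause here, TimSort only throws when it happens to observe the inconsistency, which is why such failures look random.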

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1030 - Still Failing

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1030/

No tests ran.

Build Log:
[...truncated 24174 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2212 links (1766 relative) to 3083 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487629#comment-16487629
 ] 

Shawn Heisey commented on SOLR-12390:
-

That is the source that generates the documentation, both the PDF version and 
the HTML version that you can get to on the Solr website.  Building the guide 
creates a very simple index of page titles.  The search box on the generated 
website searches that index.

If you want to attempt something more capable, feel free to check out the 
source and contribute a patch.

https://wiki.apache.org/solr/HowToContribute

SOLR-10299 discusses some ideas and provides a link where one of them has been 
implemented.

One roadblock to "real" search capability is that the reference guide is static 
content, served by a fairly basic webserver.  Providing real search capability 
typically requires special infrastructure beyond that.  Getting special 
resources from Apache INFRA usually requires significant justification.


> Website search doesn't work for simple searches
> ---
>
> Key: SOLR-12390
> URL: https://issues.apache.org/jira/browse/SOLR-12390
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.6, 7.1
>Reporter: Jean Silva
>Priority: Minor
>
> Simple searches aren't working well on the website.
> I've tested it on the 6_6 and 7_1 docs versions.
> Because the purpose of Solr is to empower better search quality, I see this 
> ticket as kinda important.
> Here are some examples for which I got no results:
> *ngram*
> *analysers* (analy*z*ers work)
> *spellcheck*
> and probably much more.
>  
> Now that I'm creating this ticket and paying more attention, I saw the 
> placeholder "Page title lookup", but even so I think this is not easily 
> noticed AND not good for us developers trying to find the documentation part 
> we want.
> If I could help with something please let me know.
> Thank you



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487616#comment-16487616
 ] 

Shawn Heisey commented on SOLR-10299:
-

[~janhoy], that implementation looks promising!

My attempts to find out the capabilities of the library you're using 
fell flat.  Are there any options for improving relevancy ranking?  Try putting 
'admin' in the search box.  The results are not terrible, but if matches in the 
page title were to get a boost, I think it would be better.

I did notice that the library uses stopwords.  If that is removed, does it 
greatly inflate the index size?


> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Comment Edited] (SOLR-12381) facet query causes down replicas

2018-05-23 Thread kiarash (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487582#comment-16487582
 ] 

kiarash edited comment on SOLR-12381 at 5/23/18 4:27 PM:
-

Thank you very much for your consideration.

Could you please explain what the oom_killer boundary means?
 Would you mind providing some details on how I can set SOLR_JAVA_MEM? As 
suggested, I have set it to half of the physical memory (SOLR_JAVA_MEM="-Xms512m 
-Xmx15240m").

In addition, I wanted to know if my problem is a bug which won't be fixed in 
version 6.


was (Author: zahirnia):
Thank you very much for your consideration.

could you please know what oom_killer boundary means?
Would you mind providing some details how I can set SOLR_JAVA_M. As its 
suggested, I have set it a half of the physical memory(SOLR_JAVA_ME="-Xms512m 
-Xmx10240m").

In addition, I wanted to know if my problem is a bug which won't be fixed in 
version 6.

> facet query causes down replicas
> 
>
> Key: SOLR-12381
> URL: https://issues.apache.org/jira/browse/SOLR-12381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: kiarash
>Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB memory.
> 3 TB SATA Disk
> My cluster involves 5 collections which together contain more than a billion 
> documents.
> I have a collection (news_archive) which contains 30 million 
> documents. This collection is divided into 3 shards, each of which 
> contains 10 million documents and occupies 100GB on disk. Each of the 
> shards has 3 replicas.
> Each of the cluster nodes contains one of the replicas of each shard; in 
> fact, the nodes are identical, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, 
> such as 
> http://Node1IP:/solr/news_archive/select?q=*:*=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]=ngram_content=true=1=2000=0=json,
>  the solr instances are killed by the OOM killer in almost all of the nodes.
> I found the below log in 
> solr/logs/solr_oom_killer--2018-05-21_19_17_41.log in each of the solr 
> instances,
> "Running OOM killer script for process 2766 for Solr on port 
> Killed process 2766"
> It seems that the query is routed to different nodes of the cluster, and, 
> given the exhaustive use of memory caused by the query, the 
> solr instances are killed by the OOM Killer.
>  
> Regardless of how memory-demanding the query is, I think the 
> cluster's nodes should be protected from being killed by any read query, 
> for example by limiting the amount of memory that any query can use.






[jira] [Commented] (SOLR-12381) facet query causes down replicas

2018-05-23 Thread kiarash (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487582#comment-16487582
 ] 

kiarash commented on SOLR-12381:


Thank you very much for your consideration.

Could you please explain what the oom_killer boundary means?
Would you mind providing some details on how I can set SOLR_JAVA_MEM? As 
suggested, I have set it to half of the physical memory (SOLR_JAVA_MEM="-Xms512m 
-Xmx10240m").

In addition, I wanted to know if my problem is a bug which won't be fixed in 
version 6.
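On the question of where to set the heap: on a standard install this normally lives in Solr's include script, via the SOLR_JAVA_MEM variable. A hedged sketch (the sizes below are purely illustrative, not a recommendation for this cluster; the heap should leave most of the RAM free for the OS page cache):

```shell
# solr.in.sh (or solr.in.cmd on Windows); sizes are illustrative only.
# Setting -Xms equal to -Xmx avoids heap-resizing pauses.
SOLR_JAVA_MEM="-Xms10g -Xmx10g"
```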

> facet query causes down replicas
> 
>
> Key: SOLR-12381
> URL: https://issues.apache.org/jira/browse/SOLR-12381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: kiarash
>Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB memory.
> 3 TB SATA Disk
> My cluster involves 5 collections which together contain more than a billion 
> documents.
> I have a collection (news_archive) which contains 30 million 
> documents. This collection is divided into 3 shards, each of which 
> contains 10 million documents and occupies 100GB on disk. Each of the 
> shards has 3 replicas.
> Each of the cluster nodes contains one of the replicas of each shard; in 
> fact, the nodes are identical, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, 
> such as 
> http://Node1IP:/solr/news_archive/select?q=*:*=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]=ngram_content=true=1=2000=0=json,
>  the solr instances are killed by the OOM killer in almost all of the nodes.
> I found the below log in 
> solr/logs/solr_oom_killer--2018-05-21_19_17_41.log in each of the solr 
> instances,
> "Running OOM killer script for process 2766 for Solr on port 
> Killed process 2766"
> It seems that the query is routed to different nodes of the cluster, and, 
> given the exhaustive use of memory caused by the query, the 
> solr instances are killed by the OOM Killer.
>  
> Regardless of how memory-demanding the query is, I think the 
> cluster's nodes should be protected from being killed by any read query, 
> for example by limiting the amount of memory that any query can use.






[jira] [Commented] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487561#comment-16487561
 ] 

Steve Rowe commented on SOLR-12388:
---

bq. I think Steve refers to cluster changes that can happen and a node might 
have missed out on hearing about.

Right.  Solr already handles these conditions, as mentioned in the description, 
via the {{zkConnected}} header.  This issue just enables callers to get a 
failure response instead of having to conditionally handle responses based on 
the value of the {{zkConnected}} header.
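For illustration, the conditional handling that callers must do today might look like the following sketch (a plain-Java stand-in for a parsed responseHeader; ZkConnectedCheck is an illustrative name, not Solr API):

```java
import java.util.Map;

// Hedged sketch, not Solr API: SolrCloud search responses include a boolean
// zkConnected flag in their responseHeader; today a caller who wants to
// reject possibly-stale results must check it client-side, like this.
public class ZkConnectedCheck {
    public static void requireZkConnected(Map<String, Object> responseHeader) {
        // The flag is only meaningful in SolrCloud mode; absence is treated
        // as "unknown" here and allowed through.
        if (Boolean.FALSE.equals(responseHeader.get("zkConnected"))) {
            throw new IllegalStateException(
                "coordinator not connected to ZooKeeper; results may be stale");
        }
    }

    public static void main(String[] args) {
        requireZkConnected(Map.of("zkConnected", true, "status", 0)); // passes
        System.out.println("healthy response accepted");
    }
}
```

The proposed strict mode would move this failure server-side, so clients get an error response instead of having to remember the check.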


> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.






[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487552#comment-16487552
 ] 

Andrzej Bialecki  commented on SOLR-12392:
--

Thanks Mark - please BadApple for now, I can work on this next week.

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
>







[jira] [Assigned] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-12392:


Assignee: Andrzej Bialecki 

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
>







[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+14) - Build # 609 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/609/
Java: 64bit/jdk-11-ea+14 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

15 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.EchoParamsTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_64AC57C1608D5B2E-001\init-core-data-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_64AC57C1608D5B2E-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_64AC57C1608D5B2E-001\init-core-data-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_64AC57C1608D5B2E-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([64AC57C1608D5B2E]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:832)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/33)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"152709119378841", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"152709119380088",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr",   

[jira] [Comment Edited] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487484#comment-16487484
 ] 

Alessandro Benedetti edited comment on LUCENE-8326 at 5/23/18 3:40 PM:
---

It is very annoying that the patch automatically generated from the GitHub Pull 
Request (which is green to merge with master) actually ends up being 
malformed...

I attach a fixed patch now, just built straight from the command line.
 The Pull Request is still valid from a code review perspective.


was (Author: alessandro.benedetti):
It is very annoying that the patch automatically generated from the Github Pull 
Request ( which is green to merge with the master) actually ends up being 
malformed...

I attacha fixed patch now, just built straight from command line.
The Pull Request is still valid from a code review perspective.

> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch, LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve the code readability, test 
> coverage and maintenance.
> Scope of this Jira issue is to start the More Like This refactor from the 
> More Like This Params.
> This Jira will not improve the current More Like This but just keep the same 
> functionality with a refactored code.
> Other Jira issues will follow improving the overall code readability, test 
> coverage and maintenance.






[jira] [Commented] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487484#comment-16487484
 ] 

Alessandro Benedetti commented on LUCENE-8326:
--

It is very annoying that the patch automatically generated from the GitHub Pull 
Request (which is green to merge with master) actually ends up being 
malformed...

I attach a fixed patch now, just built straight from the command line.
The Pull Request is still valid from a code review perspective.

> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch, LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve the code readability, test 
> coverage and maintenance.
> Scope of this Jira issue is to start the More Like This refactor from the 
> More Like This Params.
> This Jira will not improve the current More Like This but just keep the same 
> functionality with a refactored code.
> Other Jira issues will follow improving the overall code readability, test 
> coverage and maintenance.






[jira] [Updated] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated LUCENE-8326:
-
Attachment: LUCENE-8326.patch

> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch, LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve the code readability, test 
> coverage and maintenance.
> Scope of this Jira issue is to start the More Like This refactor from the 
> More Like This Params.
> This Jira will not improve the current More Like This but just keep the same 
> functionality with a refactored code.
> Other Jira issues will follow improving the overall code readability, test 
> coverage and maintenance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-23 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487477#comment-16487477
 ] 

mosh commented on SOLR-12361:
-

I have just uploaded a pull request that incorporates child documents as values 
inside SolrInputDocument.
 Perhaps you could check it out so we can discuss which of the two options is 
better suited:

1. The pull request: [GitHub Pull Request 
#382|https://github.com/apache/lucene-solr/pull/382]
2. _childDocuments as a Map:
{quote}Perhaps changing the _childDocuments to Map
{quote}
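To make the two options concrete, here is a minimal sketch in plain Java collections (not the actual SolrDocumentBase API; all names here are illustrative) contrasting today's anonymous child list with the proposed map of named child relations:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the two shapes under discussion; the real class is
// SolrDocumentBase, and every name below is invented for illustration.
public class ChildDocsSketch {
    // Option 1 (today): anonymous children in a flat list, no relation name.
    public static final List<Map<String, Object>> anonymousChildren = new ArrayList<>();

    // Option 2 (proposed): named child relations, e.g. "comments" -> [doc, doc].
    public static final Map<String, List<Map<String, Object>>> namedChildren = new LinkedHashMap<>();

    public static void main(String[] args) {
        Map<String, Object> comment = Map.of("id", "c1", "text", "great post");
        anonymousChildren.add(comment); // relation between parent and child is lost
        namedChildren.computeIfAbsent("comments", k -> new ArrayList<>()).add(comment);
        System.out.println(namedChildren.keySet()); // [comments]
    }
}
```

The map shape keeps the parent-child relationship name, which is exactly the information the flat list discards.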

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487471#comment-16487471
 ] 

Mark Miller commented on SOLR-12388:


We stop accepting document updates when we realize we have lost the connection 
to ZK, but I think Steve is referring to cluster changes that can happen which 
a node might have missed hearing about.

> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.
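A minimal sketch of the behaviour being proposed (not Solr's actual code; the strict flag is invented for illustration). Today the coordinator answers anyway and sets {{zkConnected=false}} in the response header; a strict mode would fail the request instead:

```java
import java.util.Map;

// Illustrative only: shows the lenient (current) vs. strict (proposed)
// handling when the coordinating node cannot reach ZooKeeper.
public class ZkConnectedSketch {
    public static Map<String, Object> handleSearch(boolean zkConnected, boolean strict) {
        if (!zkConnected && strict) {
            // Proposed: refuse to serve possibly-stale results.
            throw new IllegalStateException("coordinator lost ZooKeeper connection; failing request");
        }
        // Current behaviour: succeed, but flag the possibly-stale state in the header.
        return Map.of("zkConnected", zkConnected, "status", 0);
    }

    public static void main(String[] args) {
        System.out.println(handleSearch(false, false)); // lenient mode still answers
    }
}
```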



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487467#comment-16487467
 ] 

ASF subversion and git services commented on SOLR-12358:


Commit d32ce81eab69239b03f4f1b4974aa4a1b19fcd06 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d32ce81 ]

SOLR-12358: Autoscaling suggestions fail randomly with sorting


> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> 
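The "Comparison method violates its general contract" error above is TimSort detecting a Comparator that breaks its contract: compare(a,b) and compare(b,a) must have opposite signs, and the ordering must be transitive. A classic way to produce it is an integer-subtraction comparator; sorting on values that change while the sort is running has the same effect. A self-contained illustration of the class of bug (not the actual Policy code, whose exact cause may differ):

```java
import java.util.Comparator;

// Illustrative only: int subtraction can overflow, so compare(a,b) and
// compare(b,a) can agree in sign, violating the Comparator contract that
// TimSort enforces.
public class ComparatorContractSketch {
    public static final Comparator<Integer> BROKEN = (a, b) -> a - b;     // overflows
    public static final Comparator<Integer> CORRECT = Integer::compare;    // safe

    public static void main(String[] args) {
        int a = Integer.MAX_VALUE, b = -1;
        // MAX_VALUE - (-1) overflows to MIN_VALUE, so the broken comparator
        // claims a < b even though a > b.
        System.out.println(Integer.signum(BROKEN.compare(a, b)));  // -1 (wrong)
        System.out.println(Integer.signum(CORRECT.compare(a, b))); // 1
    }
}
```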

[GitHub] lucene-solr pull request #382: SOLR-12361

2018-05-23 Thread moshebla
GitHub user moshebla opened a pull request:

https://github.com/apache/lucene-solr/pull/382

SOLR-12361

All tests pass except TestChildDocTransformer.
We have to think about how we want to change the transformer, which should 
probably be discussed in another issue.
Currently it does not add the _childDocuments_ field to fl, causing the 
child documents to be omitted from the response.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/moshebla/lucene-solr SOLR-12361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/382.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #382


commit 9a55af35e320e0931b7b670b46828d68177a9075
Author: user 
Date:   2018-05-23T14:55:36Z

pass tests but TestChildDocTransformer




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487455#comment-16487455
 ] 

Noble Paul edited comment on SOLR-12294 at 5/23/18 3:19 PM:


cloud can use {{updateProcessorChain}} or {{updateRequestProcessorChain}}

this is an implementation detail about the dynamic class loading

(I have updated the comment)


was (Author: noble.paul):
cloud can use {{updateProcessorChain}} or {{updateRequestProcessorChain}}

this is an implementation detail about the dynamic class loading

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the 
> .system collection custom code is lazy loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get Exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @ jar>/update-processor-0.0.1-SNAPSHOT.jar http:// solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION=.system=2
>  2.2 Add Replica to .system collection --> 
> .../admin/collections?action=ADDREPLICA=.system=shard1
>  # Setup test collection:
>  3.1 Upload test conf to ZK --> ./zkcli.sh -zkhost  -cmd 
> upconfig -confdir  -confname test_conf
>  3.2 Create a test1 collection with commented UP-chain inside solrconfig.xml 
> via Admin UI
>  3.3 Add blob to test collection --> curl http:// Solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '\{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test-conf again --> ./zkcli.sh -zkhost 
>  -cmd upconfig -confdir  -confname test_conf
>  3.5 Reload test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart SOLR
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> core init routine, but it isn't due to above Exception.
> Sometimes you are lucky and the test1 collection will be initialized after the 
> .system collection. But ~90% of the time this isn't the case...
> Let me know if you need further details here,
>  
> Johannes
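The lazy-holder pattern the quoted comment describes can be sketched as follows. This is a self-contained illustration, not Solr's PluginBag$LazyPluginHolder; the point is that nothing is loaded until the first get(), which for UpdateProcessors is exactly the call that happens too early, during core init:

```java
import java.util.function.Supplier;

// Illustrative lazy holder: the plugin is only resolved on the first get().
// In the bug report, get() for an UpdateProcessor fires during core init,
// before the .system collection (the loader's backing store) is available.
public class LazyHolderSketch<T> {
    private final Supplier<T> loader; // e.g. would resolve the class from a blob store
    private volatile T instance;

    public LazyHolderSketch(Supplier<T> loader) { this.loader = loader; }

    public T get() {
        T local = instance;
        if (local == null) {
            synchronized (this) {
                if (instance == null) instance = loader.get();
                local = instance;
            }
        }
        return local;
    }

    public static void main(String[] args) {
        int[] loads = {0};
        LazyHolderSketch<String> h =
                new LazyHolderSketch<>(() -> { loads[0]++; return "plugin"; });
        System.out.println(loads[0]); // 0 -- nothing loaded yet
        h.get(); h.get();
        System.out.println(loads[0]); // 1 -- loaded exactly once, on first get()
    }
}
```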



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487342#comment-16487342
 ] 

Noble Paul edited comment on SOLR-12294 at 5/23/18 3:19 PM:


Sorry, you can't specify a chain like that if you need it to be dynamically 
loaded. However, you can specify a request parameter {{processor=testUP}} and 
it will work. Remove the {{< updateRequestProcessorChain >}} definition 
altogether and just specify the {{< updateProcessor >}}.
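Conceptually, the {{processor=testUP}} request parameter selects named processors per request instead of relying on a pre-wired chain; a toy sketch of that lookup (plain Java, not Solr's API — all names here are invented):

```java
import java.util.Map;
import java.util.function.UnaryOperator;

// Illustrative only: named processors are resolved from a registry at request
// time, so a dynamically loaded processor can be picked up lazily instead of
// being wired into a chain at core-init time.
public class ProcessorParamSketch {
    public static final Map<String, UnaryOperator<String>> REGISTRY =
            Map.of("testUP", doc -> doc + " [processed by testUP]");

    public static String update(String doc, String processorParam) {
        String out = doc;
        for (String name : processorParam.split(",")) {
            out = REGISTRY.get(name).apply(out); // looked up per request
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(update("doc1", "testUP")); // doc1 [processed by testUP]
    }
}
```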


was (Author: noble.paul):
sorry, you can't specify a chain like that , however you can specify a request 
parameter {{processor=testUP}} and it will work. remove the {{< 
updateRequestProcessorChain  >}} definition altogether and just specify the {{< 
updateProcessor >}}

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the 
> .system collection custom code is lazy loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get Exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @ jar>/update-processor-0.0.1-SNAPSHOT.jar http:// solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION=.system=2
>  2.2 Add Replica to .system collection --> 
> .../admin/collections?action=ADDREPLICA=.system=shard1
>  # Setup test collection:
>  3.1 Upload test conf to ZK --> ./zkcli.sh -zkhost  -cmd 
> upconfig -confdir  -confname test_conf
>  3.2 Create a test1 collection with commented UP-chain inside solrconfig.xml 
> via Admin UI
>  3.3 Add blob to test collection --> curl http:// Solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '\{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test-conf again --> ./zkcli.sh -zkhost 
>  -cmd upconfig -confdir  -confname test_conf
>  3.5 Reload test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart SOLR
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> core init routine, but it isn't due to above Exception.
> Sometimes you are lucky and the test1 collection will be initialized after the 
> .system collection. But ~90% of the time this isn't the case...
> Let me know if you need further details here,
>  
> Johannes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Commented] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487455#comment-16487455
 ] 

Noble Paul commented on SOLR-12294:
---

cloud can use {{updateProcessorChain}} or {{updateRequestProcessorChain}}

this is an implementation detail about the dynamic class loading

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the 
> .system collection custom code is lazy loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get Exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @ jar>/update-processor-0.0.1-SNAPSHOT.jar http:// solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION=.system=2
>  2.2 Add Replica to .system collection --> 
> .../admin/collections?action=ADDREPLICA=.system=shard1
>  # Setup test collection:
>  3.1 Upload test conf to ZK --> ./zkcli.sh -zkhost  -cmd 
> upconfig -confdir  -confname test_conf
>  3.2 Create a test1 collection with commented UP-chain inside solrconfig.xml 
> via Admin UI
>  3.3 Add blob to test collection --> curl http:// Solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '\{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test-conf again --> ./zkcli.sh -zkhost 
>  -cmd upconfig -confdir  -confname test_conf
>  3.5 Reload test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart SOLR
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> core init routine, but it isn't due to above Exception.
> Sometimes you are lucky and the test1 collection will be initialized after the 
> .system collection. But ~90% of the time this isn't the case...
> Let me know if you need further details here,
>  
> Johannes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Johannes Brucher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487421#comment-16487421
 ] 

Johannes Brucher edited comment on SOLR-12294 at 5/23/18 3:05 PM:
--

[~noble.paul] thank you for your reply!
 After your comment I looked into the Solr Ref guide: 
[https://lucene.apache.org/solr/guide/7_3/update-request-processors.html#update-processors-in-solrcloud]
 Because I was pretty sure that in Cloud you can still use processorChains.

And that is really the case, BUT:

You have to name it "{{{color:#008000}updateProcessorChain{color}}}" and not 
"{{{color:#008000}updateRequestProcessorChain{color}}}"!!!
That changed the whole behaviour, and now every time I restart the Cloud the 
exception is never thrown again!

I guess using "{{{color:#008000}updateProcessorChain{color}}}" utilizes a 
different core init routine, and that routine uses the lazy loading 
mechanism correctly.

[~ctargett] Maybe it's worth highlighting the difference between 
"{{{color:#008000}updateProcessorChain{color}}}" and 
"{{{color:#008000}updateRequestProcessorChain{color}}}" in the Ref Guide, and 
noting that in Cloud it seems you have to use 
"{{{color:#008000}updateProcessorChain{color}}}" to be safe here!

 

Thanks all!

Johannes


was (Author: jb@shi):
[~noble.paul] thank you for your reply!
After your comment I looked into the Solr Ref guide: 
[https://lucene.apache.org/solr/guide/7_3/update-request-processors.html#update-processors-in-solrcloud]
Because I was pretty sure that in Cloud you can still use processorChains.

And that is really the case, BUT:

You have to name it "{{{color:#008000}updateProcessorChain{color}}}" and not 
"{{{color:#008000}updateRequestProcessorChain{color}}}"!!!
That changed the whole behaviour und every time I restarted the Cloud, the 
exception is never thrown again!

I guess using "{{{color:#008000}updateProcessorChain{color}}}" utilize a 
different core init routine and that routine is using the lazy loading 
mechanism correctly.

[~ctargett] Maybe it's worth to highlight the difference between 
"{{{color:#008000}updateProcessorChain{color}}}" and 
"{{{color:#008000}updateRequestProcessorChain{color}}}" in the Ref Guide and 
that it seems that in Cloud you have to use 
"{{{color:#008000}updateProcessorChain{color}}}" to be safe here!

 

Thanks you all!

Johannes

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the 
> .system collection custom code is lazy loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get Exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @ 

[jira] [Commented] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Johannes Brucher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487421#comment-16487421
 ] 

Johannes Brucher commented on SOLR-12294:
-

[~noble.paul] thank you for your reply!
After your comment I looked into the Solr Ref guide: 
[https://lucene.apache.org/solr/guide/7_3/update-request-processors.html#update-processors-in-solrcloud]
Because I was pretty sure that in Cloud you can still use processorChains.

And that is really the case, BUT:

You have to name it "{{{color:#008000}updateProcessorChain{color}}}" and not 
"{{{color:#008000}updateRequestProcessorChain{color}}}"!!!
That changed the whole behaviour, and now every time I restart the Cloud the 
exception is never thrown again!

I guess using "{{{color:#008000}updateProcessorChain{color}}}" utilizes a 
different core init routine, and that routine uses the lazy loading 
mechanism correctly.

[~ctargett] Maybe it's worth highlighting the difference between 
"{{{color:#008000}updateProcessorChain{color}}}" and 
"{{{color:#008000}updateRequestProcessorChain{color}}}" in the Ref Guide, and 
noting that in Cloud it seems you have to use 
"{{{color:#008000}updateProcessorChain{color}}}" to be safe here!

 

Thank you all!

Johannes

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, as stated in the documentation, that when using the 
> .system collection custom code is lazy loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get Exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @ jar>/update-processor-0.0.1-SNAPSHOT.jar http:// solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add Replica to .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Setup test collection:
>  3.1 Upload test conf to ZK --> ./zkcli.sh -zkhost <zk host> -cmd 
> upconfig -confdir <conf dir> -confname test_conf
>  3.2 Create a test1 collection with commented UP-chain inside solrconfig.xml 
> via Admin UI
>  3.3 Add blob to test collection --> curl http://<host of 
> Solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload the test conf again --> ./zkcli.sh 
> -zkhost <zk host> -cmd upconfig -confdir <conf dir> -confname test_conf
>  3.5 Reload test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart SOLR
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> the core init routine, but it isn't, due to the above exception.
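For context, the "UP-chain" referred to in steps 3.2/3.4 would look roughly like this in solrconfig.xml. This is a hypothetical sketch; the chain name and the factory class name are placeholders, but the runtimeLib="true" attribute is what tells Solr to resolve the class from the blob store rather than the local classpath:

```xml
<!-- Hypothetical example; chain and class names are placeholders -->
<updateRequestProcessorChain name="test-chain">
  <!-- runtimeLib="true" + version tell Solr to resolve this factory
       from the test_blob jar stored in the .system collection -->
  <processor class="com.example.TestUpdateProcessorFactory"
             runtimeLib="true" version="1"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Commenting out the first <processor> element (step 3.2) and restoring it after the blob is registered (step 3.4) matches the reproduction sequence above.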

[jira] [Comment Edited] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487416#comment-16487416
 ] 

Cassandra Targett edited comment on SOLR-12391 at 5/23/18 3:00 PM:
---

Taking a look at the source for the page, I can see that it basically stops 
loading anything after "Suggest". But in the code that word is supposed to be 
"Suggestions", so it stops half-way through. I cannot tell why, though - the 
HTML looks valid (and the whole page passes an HTML validator):

{code}
Suggestions
{code}

The last time this page ({{solr/webapp/web/index.html}}) was changed was by 
Shalin for SOLR-11648. But that change is in 7.3 and this problem does not 
exist there. It's got to be something else.


was (Author: ctargett):
Taking a look at the source for the page, I can see that it basically stops 
loading anything after "Suggest". But in the code that word is supposed to be 
"Suggestions", so it stops half-way through. I cannot tell why, though - the 
HTML looks valid (and the whole page passes an HTML validator):

{code}
Suggestions
{code}

The last time this page ({{solr/webapp/web/index.html}}) was changed was by 
Shalin for SOLR-11648. But that change is in 7.3 and this problem does not 
exist there. It's got to be something else.

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487416#comment-16487416
 ] 

Cassandra Targett commented on SOLR-12391:
--

Taking a look at the source for the page, I can see that it basically stops 
loading anything after "Suggest". But in the code that word is supposed to be 
"Suggestions", so it stops half-way through. I cannot tell why, though - the 
HTML looks valid (and the whole page passes an HTML validator):

{code}
Suggestions
{code}

The last time this page ({{solr/webapp/web/index.html}}) was changed was by 
Shalin for SOLR-11648. But that change is in 7.3 and this problem does not 
exist there. It's got to be something else.

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487411#comment-16487411
 ] 

Mark Miller commented on SOLR-12392:


This has failed for me a lot since it went in, both locally and in test 
reports. I'll add a @BadApple.

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>






