[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread bidorbuy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237115#comment-16237115
 ] 

bidorbuy commented on SOLR-11078:
-

Just some more feedback - we still need to run a few more tests to get a 
conclusive answer, but on the surface it does look as though *Point fields degrade 
performance compared to Trie* fields with precision steps.

On Solr 6.4.2 we used "primitive" fieldTypes without a precisionStep and we also 
used fieldTypes with precisionStep - i.e.
{code}
<!-- illustrative reconstruction: the original fieldType definitions were stripped when this message was archived -->
<fieldType name="int" class="solr.TrieIntField" precisionStep="0"/>
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8"/>
{code}

We used precisionStep="8" for fields that might be used in range queries and 
precisionStep="0" for fields containing an id or enum.

When switching to Solr 7.1.0 we migrated those fieldTypes to *Point definitions* 
- i.e. instead of using "int" and "tint" we created just one definition using 
Point:
{code}
<!-- illustrative reconstruction: the original definition was stripped when this message was archived -->
<fieldType name="int" class="solr.IntPointField" docValues="true"/>
{code}

Last night we reverted to the 6.4.2 schema definition using TrieIntFields 
with precisionStep, and initial tests looked better. We are now rebuilding the 
index one more time using Trie fields with precisionStep and will run the 
7.1.0 Solr server in parallel with the current 6.4.2 in a load-balanced fashion to 
get a proper comparison. In preliminary tests it does appear that 7.1.0 using 
Trie fields with precision steps is faster than 7.1.0 with Point fields.

We will also revisit our current schema and the way we build queries/facets to 
see whether we have other issues.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and use their own 
> indexes. We round-robin load-balance through our Tomcats and have noticed that 
> performance has dropped since Solr 6.4.2. We have two indices per server, 
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in 
> performance compared to Solr 6.4.2.
> I am not sure whether this is related to metric collection or other 
> underlying changes, or whether other high-transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1508 - Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1508/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
http://127.0.0.1:43368/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html. Error 404. HTTP ERROR: 404. 
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: 
Can not find: /solr/testcollection_shard1_replica_n2/update. 
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:43368/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404
HTTP ERROR: 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not find: /solr/testcollection_shard1_replica_n2/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([85D9181FA30EA2A4:B801B6339BE0FCD4]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:541)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1008)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:183)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:126)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 281 - Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/281/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestTlogReplica

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:92)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:742)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:935)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:844)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1036)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:948)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:426)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1020)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:844)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1036)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:948)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20809 - Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20809/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

3 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream

Error Message:
Could not find collection:workQueue

Stack Trace:
java.lang.AssertionError: Could not find collection:workQueue
at 
__randomizedtesting.SeedInfo.seed([35DD91308F814E6F:171D10CBACEB647F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:7817)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

[jira] [Comment Edited] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237025#comment-16237025
 ] 

Joel Bernstein edited comment on SOLR-11598 at 11/3/17 3:09 AM:


I don't think the export handler will perform very well with 10 sort fields. 
There is a real tradeoff between the number of sort fields and performance. 
This is also true for the number of fields in the field list. So while it is 
easy to increase the number of sort fields, it will very likely be too slow for 
any kind of production use.


was (Author: joel.bernstein):
I don't think the export handler will perform very well with 10 sort fields. 
There is a real tradeoff between the number of sort fields and performance. 
This is also true for the number of fields in the field list. So while it is 
easy to increase the number sort fields, it will very likely be too slow for 
any kind of production use.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)

[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237025#comment-16237025
 ] 

Joel Bernstein commented on SOLR-11598:
---

I don't think the export handler will perform very well with 10 sort fields. 
There is a real tradeoff between the number of sort fields and performance. 
This is also true for the number of fields in the field list. So while it is 
easy to increase the number sort fields, it will very likely be too slow for 
any kind of production use.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 214 - Failure

2017-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/214/

3 tests failed.
FAILED:  org.apache.solr.search.stats.TestExactStatsCache.test

Error Message:
unable to create new native thread

Stack Trace:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
org.apache.http.impl.client.IdleConnectionEvictor.start(IdleConnectionEvictor.java:96)
at 
org.apache.http.impl.client.HttpClientBuilder.build(HttpClientBuilder.java:1219)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:287)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:209)
at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:47)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:503)
at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
at 
org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1596)
at 
org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1316)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
at 
org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:367)
at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:424)
at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:396)
at 
org.apache.solr.BaseDistributedSearchTestCase.createServers(BaseDistributedSearchTestCase.java:339)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1017)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
   

[jira] [Resolved] (LUCENE-8007) Require that codecs always store totalTermFreq, sumDocFreq and sumTotalTermFreq

2017-11-02 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-8007.
-
Resolution: Fixed

> Require that codecs always store totalTermFreq, sumDocFreq and 
> sumTotalTermFreq
> ---
>
> Key: LUCENE-8007
> URL: https://issues.apache.org/jira/browse/LUCENE-8007
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-8007.patch, LUCENE-8007.patch, LUCENE-8007.patch, 
> LUCENE-8007.patch, LUCENE-8007.patch
>
>
> Javadocs allow codecs to not store some index statistics. Given discussion 
> that occurred on LUCENE-4100, this was mostly implemented this way to support 
> pre-flex codecs. We should now require that all codecs store these statistics.






[jira] [Commented] (LUCENE-8007) Require that codecs always store totalTermFreq, sumDocFreq and sumTotalTermFreq

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237022#comment-16237022
 ] 

ASF subversion and git services commented on LUCENE-8007:
-

Commit ca5f9b34576545f1a51cde1526f42de519989527 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ca5f9b3 ]

LUCENE-8007: Make scoring statistics mandatory


> Require that codecs always store totalTermFreq, sumDocFreq and 
> sumTotalTermFreq
> ---
>
> Key: LUCENE-8007
> URL: https://issues.apache.org/jira/browse/LUCENE-8007
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-8007.patch, LUCENE-8007.patch, LUCENE-8007.patch, 
> LUCENE-8007.patch, LUCENE-8007.patch
>
>
> Javadocs allow codecs to not store some index statistics. Given discussion 
> that occurred on LUCENE-4100, this was mostly implemented this way to support 
> pre-flex codecs. We should now require that all codecs store these statistics.






[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11598:
-
Fix Version/s: (was: 6.6.1)

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)

[jira] [Updated] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6144:
---
Fix Version/s: 5.5.6
   7.1.1
   6.6.3
   7.0.2

> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.0.2, 7.2, 6.6.3, 7.1.1, 5.5.6
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.






[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236980#comment-16236980
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit a5069445d166c8862c7656129f58bf577ac25fb1 in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a506944 ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.






[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236982#comment-16236982
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit 1f41349718de41bc5817d2262fb657a0436a7bef in lucene-solr's branch 
refs/heads/branch_7_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1f41349 ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.






[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236981#comment-16236981
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit df874432b9a17b547acb24a01d3491839e6a6b69 in lucene-solr's branch 
refs/heads/branch_6_6 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df87443 ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.






[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236983#comment-16236983
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit b9ebeba5a258b30e0a0ae97e7f58144d172a1232 in lucene-solr's branch 
refs/heads/branch_7_1 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b9ebeba ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.






[jira] [Created] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-02 Thread Aroop (JIRA)
Aroop created SOLR-11600:


 Summary: Add Constructor to SelectStream which takes 
StreamEvaluators as argument. Current schema forces one to enter a stream 
expression string only 
 Key: SOLR-11600
 URL: https://issues.apache.org/jira/browse/SOLR-11600
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ, streaming expressions
Affects Versions: 7.1, 6.6.1
Reporter: Aroop
 Fix For: 7.0, 6.6.1


The use case is to be able to supply stream evaluators over a rollup 
stream in the following manner, but with strongly typed objects 
instead of streaming-expression strings.


{code:bash}
curl --data-urlencode 'expr=select(
id,
div(sum(cat1_i),sum(cat2_i)) as metric1,
coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as metric2,
rollup(
search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s asc"),
over="cat_s",sum(cat1_i),sum(cat2_i)
))' http://localhost:8983/solr/col1/stream
{code}


the current code base does not allow one to provide selectedEvaluators in a 
constructor, so one cannot prepare their select stream via java code:

{code:java}
public class SelectStream extends TupleStream implements Expressible {
    private static final long serialVersionUID = 1L;
    private TupleStream stream;
    private StreamContext streamContext;
    // note: the generic type parameters below were stripped when this message was archived and are restored here
    private Map<String, String> selectedFields;
    private Map<StreamEvaluator, String> selectedEvaluators;
    private List<StreamOperation> operations;

    public SelectStream(TupleStream stream, List<String> selectedFields) throws IOException {
        this.stream = stream;
        this.selectedFields = new HashMap<>();
        Iterator<String> var3 = selectedFields.iterator();

        while (var3.hasNext()) {
            String selectedField = var3.next();
            this.selectedFields.put(selectedField, selectedField);
        }

        this.operations = new ArrayList<>();
        this.selectedEvaluators = new HashMap<>();
    }

    public SelectStream(TupleStream stream, Map<String, String> selectedFields) throws IOException {
        this.stream = stream;
        this.selectedFields = selectedFields;
        this.operations = new ArrayList<>();
        this.selectedEvaluators = new HashMap<>();
    }
{code}
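
For illustration, a constructor along the lines being requested might look roughly 
like the following if added to the class above. This is a hypothetical sketch, not 
existing Solr code; the assumption that each StreamEvaluator maps to the output 
field it populates simply mirrors the fields the class already holds.

{code:java}
// HYPOTHETICAL sketch of the requested constructor - not existing Solr code.
// Assumes selectedEvaluators maps each StreamEvaluator to the output field it
// should populate, matching the selectedFields/selectedEvaluators members above.
public SelectStream(TupleStream stream,
                    Map<String, String> selectedFields,
                    Map<StreamEvaluator, String> selectedEvaluators) throws IOException {
    this.stream = stream;
    this.selectedFields = selectedFields;
    this.selectedEvaluators = selectedEvaluators;
    this.operations = new ArrayList<>();
}
{code}

With a constructor like that, the select/rollup pipeline shown in the curl example 
could be assembled entirely in Java without building an expression string.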







[jira] [Commented] (SOLR-11599) Change normalize function to standardize and make it work with matrices

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236947#comment-16236947
 ] 

ASF subversion and git services commented on SOLR-11599:


Commit 5ed071fd6eee0db7c50d63a4452ce37e6cef0eab in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5ed071f ]

SOLR-11599: Change normalize function to standardize and make it work with 
matrices


> Change normalize function to standardize and make it work with matrices
> ---
>
> Key: SOLR-11599
> URL: https://issues.apache.org/jira/browse/SOLR-11599
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-11599.patch
>
>
> It seems that the *normalize* function, which converts a vector to a mean of 
> 0 with a std of 1, is more typically called *standardize*.  This ticket 
> changes the name of the function and allows it to work on matrices as well as 
> vectors.
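
As a quick illustration of what the description above means, here is a minimal 
plain-Java sketch of standardizing a vector (shift to mean 0, scale to standard 
deviation 1). This is not Solr's implementation; in particular, whether the divisor 
is n or n-1 is not stated above, so the population form below is an assumption.

{code:java}
// Minimal sketch, not Solr code: standardize a vector to mean 0 and std 1.
import java.util.Arrays;

public class StandardizeSketch {
  public static void main(String[] args) {
    double[] v = {2.0, 4.0, 6.0, 8.0};
    double mean = Arrays.stream(v).average().orElse(0.0);
    // population standard deviation (divide by n); n-1 would be another valid convention
    double sd = Math.sqrt(Arrays.stream(v).map(x -> (x - mean) * (x - mean)).sum() / v.length);
    double[] z = Arrays.stream(v).map(x -> (x - mean) / sd).toArray();
    System.out.println(Arrays.toString(z)); // resulting vector has mean 0 and std 1
  }
}
{code}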






[jira] [Commented] (SOLR-11599) Change normalize function to standardize and make it work with matrices

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236944#comment-16236944
 ] 

ASF subversion and git services commented on SOLR-11599:


Commit 0ebf5e08968c7aa9f13395f9a6d57bbbdc8c6659 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ebf5e0 ]

SOLR-11599: Change normalize function to standardize and make it work with 
matrices


> Change normalize function to standardize and make it work with matrices
> ---
>
> Key: SOLR-11599
> URL: https://issues.apache.org/jira/browse/SOLR-11599
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-11599.patch
>
>
> It seems that the *normalize* function, which converts a vector to a mean of 
> 0 with a std of 1, is more typically called *standardize*.  This ticket 
> changes the name of the function and allows it to work on matrices as well as 
> vectors.






[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-02 Thread Aroop (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aroop updated SOLR-11598:
-
Description: 
I am a user of Streaming and I am currently trying to use rollups on a 
10-dimensional document.
I am unable to get correct results on this query as I am bounded by the 
limitation of the export handler which supports only 4 sort fields.
I do not see why this needs to be the case, as it could very well be 10 or 20.
My current needs would be satisfied with 10, but one would want to ask why 
can't it be any decent integer n, beyond which we know performance degrades, 
but even then it should be caveat emptor.
This is a big limitation for me, as I am working on a feature with a tight 
deadline where I need to support 10 dimensional rollups. I did not read any 
limitation on the sorting in the documentation and we went ahead with the 
installation of 6.6.1. Now we are blocked with this limitation.
This is a Jira to track this work.

[~varunthacker] 

Code Link:
https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455

Error
null:java.io.IOException: A max of 4 sorts can be specified
at 
org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
at 
org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
at 
org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
at 
org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
at 
org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
at 
org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
at 
org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
at 
org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
at 
org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
at 
org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
at 
org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 

[jira] [Updated] (LUCENE-8007) Require that codecs always store totalTermFreq, sumDocFreq and sumTotalTermFreq

2017-11-02 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8007:

Attachment: LUCENE-8007.patch

I removed the != 0 checks in checkIndex and added additional checks across the statistics. I also 
added assert != -1 in places such as CompositeReader/TermContext that sum from subs. Given that 
the value could previously be -1, it seems good to have these asserts, at least for 8.0.
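
For illustration only, here is a minimal sketch (assumed, not taken from the attached patch) of 
the kind of "summing from subs" code these asserts protect, using the public Terms statistics 
API; -1 is the historical "statistic not stored" sentinel that this issue removes.
{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;

public final class StatsSums {
  /** Sums sumDocFreq for a field across all leaves, asserting every leaf actually stores it. */
  public static long sumDocFreq(IndexReader reader, String field) throws IOException {
    long total = 0;
    for (LeafReaderContext leaf : reader.leaves()) {
      Terms terms = leaf.reader().terms(field);
      if (terms == null) {
        continue; // field has no terms in this segment
      }
      long leafSum = terms.getSumDocFreq();
      assert leafSum != -1 : "codec did not store sumDocFreq for field " + field;
      total += leafSum;
    }
    return total;
  }
}
{code}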

> Require that codecs always store totalTermFreq, sumDocFreq and 
> sumTotalTermFreq
> ---
>
> Key: LUCENE-8007
> URL: https://issues.apache.org/jira/browse/LUCENE-8007
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-8007.patch, LUCENE-8007.patch, LUCENE-8007.patch, 
> LUCENE-8007.patch, LUCENE-8007.patch
>
>
> Javadocs allow codecs to not store some index statistics. Given discussion 
> that occurred on LUCENE-4100, this was mostly implemented this way to support 
> pre-flex codecs. We should now require that all codecs store these statistics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11599) Change normalize function to standardize and make it work with matrices

2017-11-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11599:
--
Attachment: SOLR-11599.patch

> Change normalize function to standardize and make it work with matrices
> ---
>
> Key: SOLR-11599
> URL: https://issues.apache.org/jira/browse/SOLR-11599
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-11599.patch
>
>
> It seems that the *normalize* function, which converts a vector to a mean of 
> 0 with a std of 1, is more typically called *standardize*.  This ticket 
> changes the name of the function and allows it to work on matrices as well as 
> vectors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11599) Change normalize function to standardize and make it work with matrices

2017-11-02 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11599:
-

 Summary: Change normalize function to standardize and make it work 
with matrices
 Key: SOLR-11599
 URL: https://issues.apache.org/jira/browse/SOLR-11599
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


It seems that the *normalize* function, which converts a vector to a mean of 0 
with a std of 1, is more typically called *standardize*.  This ticket changes 
the name of the function and allows it to work on matrices as well as vectors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-02 Thread Aroop (JIRA)
Aroop created SOLR-11598:


 Summary: Export Writer needs to support more than 4 Sort fields - 
Say 10, ideally it should not be bound at all, but 4 seems to really short sell 
the StreamRollup capabilities.
 Key: SOLR-11598
 URL: https://issues.apache.org/jira/browse/SOLR-11598
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Affects Versions: 7.0, 6.6.1
Reporter: Aroop
Priority: Major
 Fix For: 6.6.1


I am a user of Streaming and I am currently trying to use rollups on a 10-dimensional document.
I am unable to get correct results for this query because I am bound by the limitation of the 
export handler, which supports only 4 sort fields.
I do not see why this needs to be the case, as it could very well be 10 or 20.
My current needs would be satisfied with 10, but one could ask why it can't be any reasonable 
integer n, beyond which performance is known to degrade; even then it should be caveat emptor.
This is a big limitation for me, as I am working on a feature with a tight deadline where I need 
to support 10-dimensional rollups. The documentation does not mention any limit on sorting, so we 
went ahead with the installation of 6.6.1; now we are blocked by this limitation.
This is a Jira to track this work.

[~varunthacker] 

Code Link:
https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455

Error
null:java.io.IOException: A max of 4 sorts can be specified
at 
org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
at 
org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
at 
org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
at 
org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
at 
org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
at 
org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
at 
org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
at 
org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
at 
org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
at 
org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
at 
org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 

Re: [JENKINS] Solr-reference-guide-master - Build # 2994 - Failure

2017-11-02 Thread Steve Rowe
Hi Cassandra,

> On Nov 2, 2017, at 8:23 PM, Cassandra Targett  wrote:
> 
> I'm pretty sure none of the commits I did this afternoon caused this,
> but possibly the changes for LUCENE-6144 did?

Yes, I caused it with LUCENE-6144.  I’ve now run the ‘Lucene-Ivy-Bootstrap’ job on each of the 
ASF nodes that run Lucene/Solr jobs (lucene, lucene2, H19, H20), and on all four Policeman 
Jenkins VBOXes, though ivy-2.3.0.jar can’t be removed from the Windows VBOX and so is blocking 
further builds there.

> Also, I noticed the branch_7x Ref Guide build hasn't run in 1.5 days,
> which is unrelated to this, I think, but warrants investigation.

This morning on bui...@apache.org, Chris Thistlethwaite wrote:

> Jenkins had to be restarted again at 14:53 UTC as it was not queuing 
> builds. We're continuing to investigate and troubleshoot this issue.

--
Steve
www.lucidworks.com
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11503) Collections created with legacyCloud=true cannot be opened if legacyCloud=false

2017-11-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236888#comment-16236888
 ] 

Erick Erickson commented on SOLR-11503:
---

Since there are no objections, I'll commit this tomorrow. I'll remove the nocommit (I just 
verified that the error still occurs if I comment the fixes out), and I'll add a CHANGES.txt 
entry before putting up the final patch. The final patch will have no code changes, just comment 
changes.

> Collections created with legacyCloud=true cannot be opened if 
> legacyCloud=false
> ---
>
> Key: SOLR-11503
> URL: https://issues.apache.org/jira/browse/SOLR-11503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Critical
> Attachments: SOLR-11503.patch, SOLR-11503.patch, SOLR-11503.patch
>
>
> SOLR-11122 introduced a bug starting with 6.6.1 that means if you create a 
> collection with legacyCloud=true then switch to legacyCloud=false, you get an 
> NPE because coreNodeName is not defined in core.properties.
> Since the default for legacyCloud changed from true to false between 6.6.1 
> and 7.x, this means that any attempt to upgrade Solr with existing 
> collections *created with Solr 6.6.1 or 6.6.2 will fail* if the default value 
> for legacyCloud is used in both. Collections created with 6.6.0 would work. 
> Collections created in 6.6.1 or 6.6.2 with legacyCloud=false will work.
> This is not as egregious for collections *created with 7.0*, since if the default 
> legacyCloud=false is present when the core is created, properties are persisted with 
> coreNodeName. However, if someone switches legacyCloud to true, then creates a collection, then 
> changes legacyCloud back to false, they'll hit this even in 7.0+.
> This happened because a bit of reordering switched the order of the calls below. coreNodeName 
> is added to the descriptor in create/createFromDescriptor(this, cd) via zkController.preRegister, 
> so coresLocator.create(this, cd) persists core.properties without coreNodeName.
> _original order_
> SolrCore core = createFromDescriptor(cd, true, newCollection);
> coresLocator.create(this, cd);
> (NOTE: private calls to create were renamed to createFromDescriptor in 
> SOLR-11122).
> I've got a fix in the works for creating cores; I'll attach a preliminary patch (without tests) 
> shortly for discussion. The real question is what to do about 6.6.1 and 6.6.2, and 7.1 for that 
> matter.
> This is compounded by the fact that with the CVE there's a strong incentive to move to 6.6.2. 
> Sigh.
> There are two parts to fixing this completely:
> 1> create core.properties correctly
> 2> deal with coreNodeName not being in the core.properties file by going to ZK, getting it, and 
> persisting it. I haven't worked that part out yet, so it's not in the first patch. Note one 
> point here: if it works as I hope, it will update the core.properties files the first time 
> they're opened.
> Options that I see; again, there are really two parts:
> *part1 create the core.properties correctly*
> > Release 6.6.3 and/or 7.1.1 with this fix. This still leaves 7.0 as a problem.
> > Recommend people not install 7.x over collections created with 6.x until they have a version 
> > with the fix (7.1.1? 7.2?). Switching legacyCloud values and creating collections is at your 
> > own risk.
> > Recommend that people set legacyCloud=true in 7.x until they start working with a fixed 
> > version, which one TBD.
> *part2 deal with coreNodeName not being in the core.properties*
> > Don't backport; release with 7.2 and set legacyCloud=true until then?
> > Backport to point releases like 7.1.1? 6.6.3?
> > And what about 7.0? I don't think many people will be affected by 7.0, since 7.1 came out so 
> > soon after, and setting legacyCloud=true will let people get by.
> Fixing the two parts is not in question; they both need to be fixed. The real question is 
> whether we need to create a point release that incorporates one or both, or whether it is 
> enough to say "you must set legacyCloud=true prior to Solr version 7.# in order to work with 
> any collections created with Solr versions 6.6.1 through 7.#".
> Let's hear opinions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 282 - Still Failing!

2017-11-02 Thread Steve Rowe
Hi Uwe,

After upgrading the Lucene/Solr Ivy version to 2.4.0 on master and branch_7x, I 
created jobs on Policeman Jenkins to run ‘ant ivy-bootstrap’ for each VBOX and 
then ran them.  However, the 2.3.0 jar can’t be deleted from the Windows VBOX 
for some reason, so further Windows builds will be broken until the 2.3.0 jar 
is removed.  

Can you please remove C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar from the Windows 
VBOX?

Thanks.

--
Steve
www.lucidworks.com

> On Nov 2, 2017, at 8:10 PM, Policeman Jenkins Server  
> wrote:
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/282/
> Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC
> 
> No tests ran.
> 
> Build Log:
> [...truncated 31 lines...]
> BUILD FAILED
> C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\build.xml:826: The 
> following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:435:
>  The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:446:
>  Found disallowed Ivy jar(s): C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar
> 
> Total time: 0 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
> were found. Configuration error?
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-reference-guide-master - Build # 2994 - Failure

2017-11-02 Thread Cassandra Targett
I'm pretty sure none of the commits I did this afternoon caused this,
but possibly the changes for LUCENE-6144 did?

Also, I noticed the branch_7x Ref Guide build hasn't run in 1.5 days,
which is unrelated to this, I think, but warrants investigation.

On Thu, Nov 2, 2017 at 6:04 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Solr-reference-guide-master/2994/
>
> Log:
> Started by timer
> [EnvInject] - Loading node environment variables.
> Building remotely on H19 (git-websites) in workspace 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
>  > git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>  > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
> timeout=10
> Cleaning workspace
>  > git rev-parse --verify HEAD # timeout=10
> Resetting working tree
>  > git reset --hard # timeout=10
>  > git clean -fdx # timeout=10
> Fetching upstream changes from git://git.apache.org/lucene-solr.git
>  > git --version # timeout=10
>  > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
> +refs/heads/*:refs/remotes/origin/*
>  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
>  > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
> Checking out Revision 19db1df81a18e6eb2cce5be973bf2305d606a9f8 
> (refs/remotes/origin/master)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f 19db1df81a18e6eb2cce5be973bf2305d606a9f8
> Commit message: "LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now 
> removes old Ivy jars in ~/.ant/lib/."
>  > git rev-list 42cc1c07f70af071b2ec7643b7e6c2d7e82cac81 # timeout=10
> No emails were triggered.
> [Solr-reference-guide-master] $ /usr/bin/env bash 
> /tmp/jenkins6785087023254441381.sh
> + set -e
> + RVM_PATH=/home/jenkins/.rvm
> + RUBY_VERSION=ruby-2.3.3
> + GEMSET=solr-refguide-gemset
> + curl -sSL https://get.rvm.io
> + bash -s -- --ignore-dotfiles stable
> Turning on ignore dotfiles mode.
> Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
> Downloading 
> https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
> gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
> gpg: Good signature from "Michal Papis (RVM signing) "
> gpg: aka "Michal Papis "
> gpg: aka "[jpeg image of size 5015]"
> gpg: WARNING: This key is not certified with a trusted signature!
> gpg:  There is no indication that the signature belongs to the owner.
> Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
>  Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
> GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'
>
> Upgrading the RVM installation in /home/jenkins/.rvm/
> Upgrade of RVM in /home/jenkins/.rvm/ is complete.
>
> Upgrade Notes:
>
>   * No new notes to display.
>
> + set +x
> Running 'source /home/jenkins/.rvm/scripts/rvm'
> Running 'rvm autolibs disable'
> Running 'rvm install ruby-2.3.3'
> Already installed ruby-2.3.3.
> To reinstall use:
>
> rvm reinstall ruby-2.3.3
>
> Running 'rvm gemset create solr-refguide-gemset'
> ruby-2.3.3 - #gemset created 
> /home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
> ruby-2.3.3 - #generating solr-refguide-gemset wrappers
> Running 'rvm ruby-2.3.3@solr-refguide-gemset'
> Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
> Running 'gem install --force --version 3.5.0 jekyll'
> Successfully installed jekyll-3.5.0
> Parsing documentation for jekyll-3.5.0
> Done installing documentation for jekyll after 2 seconds
> 1 gem installed
> Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
> Successfully installed jekyll-asciidoc-2.1.0
> Parsing documentation for jekyll-asciidoc-2.1.0
> Done installing documentation for jekyll-asciidoc after 0 seconds
> 1 gem installed
> Running 'gem install --force --version 1.1.2 pygments.rb'
> Successfully installed pygments.rb-1.1.2
> Parsing documentation for pygments.rb-1.1.2
> Done installing documentation for pygments.rb after 0 seconds
> 1 gem installed
> + ant clean build-site build-pdf
> Buildfile: 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml
>
> clean:
>
> build-init:
> [mkdir] Created dir: 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
>  [echo] Copying all examples to build directory...
>  [copy] Copying 1 file to 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
>  [echo] Copying all non template files from src ...
>  [copy] Copying 377 files to 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
>  [echo] Copy (w/prop replacement) any template files 

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 282 - Still Failing!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/282/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 31 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\build.xml:826: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:435: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:446: 
Found disallowed Ivy jar(s): C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 6994 - Still Failing!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6994/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\build.xml:826: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:435:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:446:
 Found disallowed Ivy jar(s): C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Solr-reference-guide-master - Build # 2995 - Still Failing

2017-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/2995/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H19 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 19db1df81a18e6eb2cce5be973bf2305d606a9f8 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 19db1df81a18e6eb2cce5be973bf2305d606a9f8
Commit message: "LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now 
removes old Ivy jars in ~/.ant/lib/."
 > git rev-list 19db1df81a18e6eb2cce5be973bf2305d606a9f8 # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /usr/bin/env bash 
/tmp/jenkins5627434945598243305.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
gpg: Good signature from "Michal Papis (RVM signing) "
gpg: aka "Michal Papis "
gpg: aka "[jpeg image of size 5015]"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
 Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'

Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Upgrade Notes:

  * No new notes to display.

+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.3.3'
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset wrappers
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 2 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 pygments.rb'
Successfully installed pygments.rb-1.1.2
Parsing documentation for pygments.rb-1.1.2
Done installing documentation for pygments.rb after 0 seconds
1 gem installed
+ ant clean build-site build-pdf
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml

clean:

build-init:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all examples to build directory...
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all non template files from src ...
 [copy] Copying 377 files to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copy (w/prop replacement) any template files from src...
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content

ivy-availability-check:

-ivy-fail-disallowed-ivy-version:
 [echo] Please delete the following disallowed Ivy jar(s): 
/home/jenkins/.ant/lib/ivy-2.3.0.jar

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/lucene/common-build.xml:435:
 The following error occurred while executing this line:

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 729 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/729/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

No tests ran.

Build Log:
[...truncated 31 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:826: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:435: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:446: 
Found disallowed Ivy jar(s): /home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20807 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20807/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC 
--illegal-access=deny

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:826: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:435: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:446: 
Found disallowed Ivy jar(s): /home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 728 - Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/728/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:35405/solr/awhollynewcollection_0_shard1_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:35405/solr/awhollynewcollection_0_shard1_replica_n2), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:35405/solr/awhollynewcollection_0_shard1_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:35405/solr/awhollynewcollection_0_shard1_replica_n2), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([6F44C9B8309C204C:2731BD0C36AF0FD9]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:541)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1008)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:937)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:183)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236794#comment-16236794
 ] 

Yonik Seeley commented on SOLR-11078:
-

bq. Maybe we should update recommendations instead so that id fields would use 
StrField while true numbers would use point fields?

That would break a few things that expect numeric order (faceting tiebreaks, etc.). There's no 
reason not to just use the trie-field method of manipulating the value so that the lexical order 
equals the numeric order. Range queries would then also still work, and people could choose 
whether the BKD overhead is worth it for better range-query performance. For some use cases with 
smaller numbers of unique numeric values, just using the postings will still be faster anyway.
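
To make that concrete, here is a rough sketch of such an encoding (not the exact trie/legacy 
numeric encoding Lucene uses): flipping the sign bit and emitting a fixed-width representation 
makes lexicographic order agree with numeric order, so an integer id can live in the terms 
dictionary and still compare correctly.
{code}
public final class SortableIntTerms {
  /**
   * Encodes an int so that comparing the encoded strings lexicographically
   * gives the same order as comparing the original ints numerically.
   */
  public static String encode(int value) {
    // Flipping the sign bit maps Integer.MIN_VALUE..MAX_VALUE onto 0..0xFFFFFFFF
    // in the same order; fixed-width lower-case hex then sorts lexicographically.
    long unsigned = (value ^ 0x80000000) & 0xFFFFFFFFL;
    return String.format("%08x", unsigned);
  }

  public static void main(String[] args) {
    // -5 < 3 < 1000 numerically, and the encoded forms sort the same way.
    System.out.println(encode(-5));   // 7ffffffb
    System.out.println(encode(3));    // 80000003
    System.out.println(encode(1000)); // 800003e8
  }
}
{code}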

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236792#comment-16236792
 ] 

Yonik Seeley commented on SOLR-11078:
-

bq. Some indexes we haven't reverted use Point for boosting only.

You only need docValues for that (so indexed should probably be "false"). When indexed=true, 
point fields are added to a BKD index, and trie fields are added to a full-text (postings) index.
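
As a concrete (hypothetical) illustration, such a docValues-only field could be added through the 
Schema API from SolrJ. The field name, collection name, and the assumption that a Point-based 
"pint" fieldType already exists in the schema are all illustrative.
{code}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;

public class AddBoostField {
  public static void main(String[] args) throws Exception {
    // Field used only for boosting/function queries: docValues is enough,
    // so indexed=false avoids building the BKD structure for it.
    Map<String, Object> attrs = new LinkedHashMap<>();
    attrs.put("name", "popularity_boost"); // hypothetical field name
    attrs.put("type", "pint");             // assumes an IntPointField fieldType named "pint"
    attrs.put("indexed", false);
    attrs.put("stored", false);
    attrs.put("docValues", true);

    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      new SchemaRequest.AddField(attrs).process(client, "myCollection");
    }
  }
}
{code}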

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 276 - Still Failing!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/276/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 39 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:826: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:435:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:446:
 Found disallowed Ivy jar(s): /export/home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1507 - Still Failing!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1507/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 37 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:826: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:435:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:446:
 Found disallowed Ivy jar(s): /export/home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 280 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/280/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

No tests ran.

Build Log:
[...truncated 35 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/build.xml:826: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/common-build.xml:435: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/common-build.xml:446: 
Found disallowed Ivy jar(s): /Users/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4264 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4264/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 33 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:826: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/common-build.xml:435: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/common-build.xml:446: 
Found disallowed Ivy jar(s): /Users/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 281 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/281/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 31 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\build.xml:826: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:435: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\common-build.xml:446: 
Found disallowed Ivy jar(s): C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6993 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6993/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\build.xml:826: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:435:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:446:
 Found disallowed Ivy jar(s): C:\Users\jenkins\.ant\lib\ivy-2.3.0.jar

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Solr-reference-guide-master - Build # 2994 - Failure

2017-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/2994/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H19 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 19db1df81a18e6eb2cce5be973bf2305d606a9f8 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 19db1df81a18e6eb2cce5be973bf2305d606a9f8
Commit message: "LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now 
removes old Ivy jars in ~/.ant/lib/."
 > git rev-list 42cc1c07f70af071b2ec7643b7e6c2d7e82cac81 # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /usr/bin/env bash 
/tmp/jenkins6785087023254441381.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
gpg: Good signature from "Michal Papis (RVM signing) "
gpg: aka "Michal Papis "
gpg: aka "[jpeg image of size 5015]"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
 Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'

Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Upgrade Notes:

  * No new notes to display.

+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.3.3'
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset wrappers
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 2 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 pygments.rb'
Successfully installed pygments.rb-1.1.2
Parsing documentation for pygments.rb-1.1.2
Done installing documentation for pygments.rb after 0 seconds
1 gem installed
+ ant clean build-site build-pdf
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml

clean:

build-init:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all examples to build directory...
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all non template files from src ...
 [copy] Copying 377 files to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copy (w/prop replacement) any template files from src...
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content

ivy-availability-check:

-ivy-fail-disallowed-ivy-version:
 [echo] Please delete the following disallowed Ivy jar(s): 
/home/jenkins/.ant/lib/ivy-2.3.0.jar

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/lucene/common-build.xml:435:
 The following error occurred while executing this line:

[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2017-11-02 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236750#comment-16236750
 ] 

Varun Thacker commented on SOLR-8599:
-

Hi Dennis,

In ConnectionManager do we intend to wait for 1 or 5 seconds? Just wanted to 
clarify, and maybe fix it in another Jira.
{code}
  log.info("Could not connect due to error, sleeping for 5s and trying agian");
  waitSleep(1000);
{code}
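
For reference, a tiny self-contained sketch of how the message and the sleep could be kept 
from drifting apart (hypothetical class and constant names, not the actual ConnectionManager 
code); whichever duration is intended then only has to be set in one place:
{code:java}
import java.util.concurrent.TimeUnit;

// Hypothetical illustration only: derive the log message from the same constant
// that drives the sleep, so the text and the actual wait cannot disagree.
public class ReconnectSleepSketch {
    private static final long RECONNECT_SLEEP_MS = TimeUnit.SECONDS.toMillis(5);

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("Could not connect due to error, sleeping for %dms and trying again%n",
                RECONNECT_SLEEP_MS);
        Thread.sleep(RECONNECT_SLEEP_MS);
    }
}
{code}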

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 5.5.1, 6.0
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}
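
As a rough, hypothetical illustration of point 1 above (retrying the client construction 
instead of giving up on the first failure); the names below are made up and this is not the 
actual DefaultConnectionStrategy code:
{code:java}
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a factory call (e.g. () -> new SolrZooKeeper(...))
// a bounded number of times before propagating the failure. maxAttempts is assumed >= 1.
public class RetryOnConstructionError {

    static <T> T createWithRetries(Callable<T> factory, int maxAttempts, long sleepMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return factory.call();
            } catch (Exception e) {    // e.g. UnknownHostException from a DNS hiccup
                last = e;
                Thread.sleep(sleepMs); // back off before the next attempt
            }
        }
        throw last;                    // give up; the caller must not be left half-updated
    }
}
{code}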



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-6144.

   Resolution: Fixed
 Assignee: Steve Rowe  (was: Shawn Heisey)
Fix Version/s: 7.2
   master (8.0)

> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1506 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1506/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 37 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:826: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:435:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:446:
 Found disallowed Ivy jar(s): /export/home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 275 - Failure!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/275/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 39 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:826: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:435:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:446:
 Found disallowed Ivy jar(s): /export/home/jenkins/.ant/lib/ivy-2.3.0.jar

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236707#comment-16236707
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit 503a6b775fc0df801521d8b54655d53625403fef in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=503a6b7 ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236706#comment-16236706
 ] 

ASF subversion and git services commented on SOLR-11423:


Commit 0637407ea4bf3a49ed5bdabadcfee650a8e0a200 in lucene-solr's branch 
refs/heads/master from [~dragonsinth]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0637407 ]

SOLR-11423: fix typo


> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>Priority: Major
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?
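
A minimal, self-contained sketch of the proposed behaviour (made-up class and field names, 
not the actual ZK-backed queue code): reject the offer once the configured cap is reached.
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical in-memory stand-in for the overseer queue, illustrating a hard cap.
public class BoundedQueueSketch {
    private final Queue<byte[]> queue = new ArrayDeque<>();
    private final int maxQueueSize;

    public BoundedQueueSketch(int maxQueueSize) { // e.g. 10,000
        this.maxQueueSize = maxQueueSize;
    }

    public synchronized void offer(byte[] data) {
        if (queue.size() >= maxQueueSize) {
            // Clients see an immediate failure instead of growing the queue without bound.
            throw new IllegalStateException("Queue is full (" + maxQueueSize + " items)");
        }
        queue.add(data);
    }

    public synchronized byte[] poll() {
        return queue.poll();
    }
}
{code}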



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[NOTICE] Lucene/Solr build: Ivy version upgrade

2017-11-02 Thread Steve Rowe
I just pushed LUCENE-6144 to master and branch_7x, and as a result the 
supported Ivy version has changed from 2.3.0 to 2.4.0.

Run ‘ant ivy-bootstrap’ to download the new jar to, and delete the old jar 
from, ~/.ant/lib/.

--
Steve
www.lucidworks.com
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236709#comment-16236709
 ] 

ASF subversion and git services commented on LUCENE-6144:
-

Commit 19db1df81a18e6eb2cce5be973bf2305d606a9f8 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=19db1df ]

LUCENE-6144: Upgrade Ivy to 2.4.0; 'ant ivy-bootstrap' now removes old Ivy jars 
in ~/.ant/lib/.


> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-11-02 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236685#comment-16236685
 ] 

Scott Blum commented on SOLR-11423:
---

Oh wow, nice catch.  How'd I mess that one up?

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>Priority: Major
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11205) Make arbitrary metrics values available for policies

2017-11-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11205.
---
   Resolution: Fixed
Fix Version/s: 7.1

> Make arbitrary metrics values available for policies
> 
>
> Key: SOLR-11205
> URL: https://issues.apache.org/jira/browse/SOLR-11205
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.1
>
>
> Any metric available in the metrics API should be available for policy 
> configurations.
> Example:
> {code}
> {'replica': 0, 'metrics:solr.jvm/os.systemLoadAverage': '<0.5'}
> {code}
> So the syntax to use a metric is:
> {code}
> metrics:<group>/<metric name>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11409) A ref guide page on setting up solr on aws

2017-11-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236605#comment-16236605
 ] 

Cassandra Targett commented on SOLR-11409:
--

[~sarkaramr...@gmail.com]: I'm reviewing this new page, and noted one 
discrepancy I'm not clear about. At the top of the page, you write that the 
quick start is not meant for production systems, but in the Appendix you note 
that for production one should have a ZK ensemble of at least 3 nodes. 

Is it recommended to _never_ deploy a production system to EC2? Or could you do 
it if you put an ensemble into place?

IOW, are there reasons other than ZK why someone should avoid deploying a 
production Solr to EC2? 

> A ref guide page on setting up solr on aws
> --
>
> Key: SOLR-11409
> URL: https://issues.apache.org/jira/browse/SOLR-11409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11409.patch, quick-start-aws-key.png, 
> quick-start-aws-security-1.png, quick-start-aws-security-2.png
>
>
> It will be nice if we have a dedicated page on installing solr on aws . 
> At the end we could even link to 
> http://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236584#comment-16236584
 ] 

David Smiley commented on SOLR-11595:
-

Hmm; true.  This could be pretty simple -- a ConcurrentHashMap of String field 
to CollectionStatistics.  Does that sound like something committable to you?  I 
can create a new issue (for Lucene) and a patch with this approach.
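
Roughly, the cache being suggested could look like the sketch below. It is only an 
illustration under assumptions (a searcher-lifetime cache keyed by field name, with a 
made-up loader interface standing in for the expensive computation), not the actual patch:
{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.lucene.search.CollectionStatistics;

// Hypothetical sketch of a per-searcher cache of field -> CollectionStatistics.
// Because the cache would live only as long as one (immutable) searcher view,
// it never needs invalidation.
public class CollectionStatisticsCacheSketch {

    /** Stand-in for the expensive computation (e.g. IndexSearcher.collectionStatistics). */
    public interface StatsLoader {
        CollectionStatistics load(String field) throws IOException;
    }

    private final ConcurrentMap<String, CollectionStatistics> cache = new ConcurrentHashMap<>();

    public CollectionStatistics get(String field, StatsLoader loader) throws IOException {
        CollectionStatistics stats = cache.get(field);
        if (stats == null) {
            stats = loader.load(field);       // may be computed twice under a race,
            CollectionStatistics prev = cache.putIfAbsent(field, stats); // which is harmless here
            if (prev != null) {
                stats = prev;
            }
        }
        return stats;
    }
}
{code}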

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with some 150 
> fields in the query shows that building the MultiTerms here is expensive.  
> Fortunately it turns out that Solr already has a cached instance via 
> {{SlowCompositeReaderWrapper}} (using MultiFields, which has a 
> ConcurrentHashMap to the MultiTerms keyed by field String).
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straight-forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4263 - Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4263/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

23 tests failed.
FAILED:  org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration

Error Message:
java.io.IOException: Couldn't instantiate 
org.apache.zookeeper.ClientCnxnSocketNIO

Stack Trace:
org.apache.solr.common.SolrException: java.io.IOException: Couldn't instantiate 
org.apache.zookeeper.ClientCnxnSocketNIO
at 
__randomizedtesting.SeedInfo.seed([292390D85C967211:97A8F67725EC7C24]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:167)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:98)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:268)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:262)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.ClusterStateUpdateTest.setUp(ClusterStateUpdateTest.java:46)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2147 - Still Unstable

2017-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2147/

3 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([2A165684D6478801:A242695E78BBE5F9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 280 - Still Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/280/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.snowball.TestSnowballVocab

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017\italian:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017\italian

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017\italian:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017\italian
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_92F614BEFEF8DCEF-001\tempDir-017

at __randomizedtesting.SeedInfo.seed([92F614BEFEF8DCEF]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7B5784C2FD3F8AB0-001\testSeekSliceZero-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7B5784C2FD3F8AB0-001\testSeekSliceZero-009
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7B5784C2FD3F8AB0-001\testSeekSliceZero-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7B5784C2FD3F8AB0-001\testSeekSliceZero-009

at __randomizedtesting.SeedInfo.seed([7B5784C2FD3F8AB0]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-11146) Analytics Component 2.0 Bug Fixes

2017-11-02 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11146:
--
Fix Version/s: master (8.0)

> Analytics Component 2.0 Bug Fixes
> -
>
> Key: SOLR-11146
> URL: https://issues.apache.org/jira/browse/SOLR-11146
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Blocker
> Fix For: 7.2, master (8.0)
>
>
> The new Analytics Component has several small bugs in mapping functions and 
> other places. This ticket is a fix for a large number of them. This patch 
> should allow all unit tests created in SOLR-11145 to pass.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11145) Comprehensive Unit Tests for the Analytics Component

2017-11-02 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11145:
--
Fix Version/s: master (8.0)

> Comprehensive Unit Tests for the Analytics Component
> 
>
> Key: SOLR-11145
> URL: https://issues.apache.org/jira/browse/SOLR-11145
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Critical
> Fix For: 7.2, master (8.0)
>
>
> Adding comprehensive unit tests for the new Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #269: Added parameter sections for analytics facets...

2017-11-02 Thread HoustonPutman
Github user HoustonPutman closed the pull request at:

https://github.com/apache/lucene-solr/pull/269


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #269: Added parameter sections for analytics facets.

2017-11-02 Thread HoustonPutman
Github user HoustonPutman commented on the issue:

https://github.com/apache/lucene-solr/pull/269
  
Changes merged and added alongside all of the analytics documentation in 
[this 
commit.](https://github.com/apache/lucene-solr/commit/6428ddb10e975f8b8955aebb9b59e5de201face0)


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236450#comment-16236450
 ] 

ASF GitHub Bot commented on SOLR-11144:
---

Github user HoustonPutman closed the pull request at:

https://github.com/apache/lucene-solr/pull/229


> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
> Fix For: 7.2, master (8.0)
>
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #229: SOLR-11144: Initial version of the analytics compone...

2017-11-02 Thread HoustonPutman
Github user HoustonPutman commented on the issue:

https://github.com/apache/lucene-solr/pull/229
  
Analytics reference merged in [this 
commit.](https://github.com/apache/lucene-solr/commit/6428ddb10e975f8b8955aebb9b59e5de201face0)


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236449#comment-16236449
 ] 

ASF GitHub Bot commented on SOLR-11144:
---

Github user HoustonPutman commented on the issue:

https://github.com/apache/lucene-solr/pull/229
  
Analytics reference merged in [this 
commit.](https://github.com/apache/lucene-solr/commit/6428ddb10e975f8b8955aebb9b59e5de201face0)


> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
> Fix For: 7.2, master (8.0)
>
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #229: SOLR-11144: Initial version of the analytics ...

2017-11-02 Thread HoustonPutman
Github user HoustonPutman closed the pull request at:

https://github.com/apache/lucene-solr/pull/229


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236432#comment-16236432
 ] 

Cassandra Targett commented on SOLR-11144:
--

I did neglect to add the PR #s to my commit message to close them - if you 
wouldn't mind closing them yourself, Houston, that would be great.

> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
> Fix For: 7.2, master (8.0)
>
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11590) Synchronize ZK connect/disconnect handling

2017-11-02 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236419#comment-16236419
 ] 

Varun Thacker commented on SOLR-11590:
--

SOLR-6261 is another Jira that's relevant here. We added a thread pool there to 
execute the watch event callbacks.

> Synchronize ZK connect/disconnect handling
> --
>
> Key: SOLR-11590
> URL: https://issues.apache.org/jira/browse/SOLR-11590
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> Here is a sequence of 2 disconnects and re-connects
> {code}
> 1. 2017-10-31T08:34:23.106-0700 Watcher 
> org.apache.solr.common.cloud.ConnectionManager@1579ca20 
> name:ZooKeeperConnection Watcher:host:port got event WatchedEvent 
> state:Disconnected type:None path:null path:null type:None
> 2. 2017-10-31T08:34:23.106-0700 zkClient has disconnected
> 3. 2017-10-31T08:34:23.107-0700 Watcher 
> org.apache.solr.common.cloud.ConnectionManager@1579ca20 
> name:ZooKeeperConnection Watcher:host:port got event WatchedEvent 
> state:SyncConnected type:None path:null path:null type:None
> {code}
> {code}
> 1. 2017-10-31T08:36:46.541-0700 Watcher 
> org.apache.solr.common.cloud.ConnectionManager@1579ca20 
> name:ZooKeeperConnection Watcher:host:port got event WatchedEvent 
> state:Disconnected type:None path:null path:null type:None
> 2. 2017-10-31T08:36:46.549-0700 Watcher 
> org.apache.solr.common.cloud.ConnectionManager@1579ca20 
> name:ZooKeeperConnection Watcher:host:port got event WatchedEvent 
> state:SyncConnected type:None path:null path:null type:None
> 2. 2017-10-31T08:36:46.563-0700 zkClient has disconnected
> {code}
> In the first disconnect the sequence is -  get disconnect watcher, execute 
> disconnect code, execute connect code
> In the second disconnect the sequence is - get disconnect watcher, execute 
> connect code, execute disconnect code
> In the second sequence of events, if the JVM has leader replicas then all 
> updates start failing with "Cannot talk to ZooKeeper - Updates are disabled." 
> This starts happening after exactly 27 seconds (the zk client timeout is 30s, 
> and 90% of 30 = 27, which is when the code thinks the session is likely 
> expired). There were no leadership changes since there was no session expiry. 
> Unless you restart the node, all updates to the system continue to fail.
> These log lines are from Solr 5.3, where the WatchedEvent was still being 
> logged as INFO.
> We process the connect code and then process the disconnect code out of order, 
> based on the log ordering. The connection is active but the flag is not set, 
> and hence after 27 seconds {{zkCheck}} starts complaining that the connection 
> is likely expired.
> A related Jira is SOLR-5721
> ZK gives us ordered watch events ( 
> https://zookeeper.apache.org/doc/r3.4.8/zookeeperProgrammers.html#sc_WatchGuarantees
>  ) but from what I understand Solr can still process them out of order. We 
> could take a lock and synchronize {{ConnectionManager#connected}} and 
> {{ConnectionManager#disconnected}}. 
> Would that be the right approach to take?
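
As a purely illustrative sketch of that idea (made-up names; note that a shared lock only 
serializes the two callbacks and does not by itself decide which of two reordered events 
should win):
{code:java}
// Hypothetical sketch: connect/disconnect handling share one lock so the
// "connected" flag is never updated concurrently by the two callbacks.
public class ConnectionStateSketch {
    private final Object stateLock = new Object();
    private boolean connected;

    public void onConnected() {
        synchronized (stateLock) {
            connected = true;
        }
    }

    public void onDisconnected() {
        synchronized (stateLock) {
            connected = false;
        }
    }

    public boolean isConnected() {
        synchronized (stateLock) {
            return connected;
        }
    }
}
{code}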



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-11-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236399#comment-16236399
 ] 

Tomás Fernández Löbbe commented on SOLR-11423:
--

Thanks for adding this, [~dragonsinth]. One question:
{code:java}
// Allow this client to push up to 1% of the remaining queue capacity without rechecking.
offerPermits.set(maxQueueSize - stat.getNumChildren() / 100);
{code}
Don't you want {{offerPermits.set((maxQueueSize - stat.getNumChildren()) / 
100);}}, or maybe {{offerPermits.set(remainingCapacity / 100);}}?
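
For what it's worth, a tiny standalone check of the precedence difference (the numbers are 
made up):
{code:java}
// Division binds tighter than subtraction, so the two expressions differ a lot.
public class PrecedenceCheck {
    public static void main(String[] args) {
        int maxQueueSize = 10_000;
        int numChildren = 4_000;

        System.out.println(maxQueueSize - numChildren / 100);   // 10000 - 40 = 9960
        System.out.println((maxQueueSize - numChildren) / 100); // 6000 / 100 = 60 (1% of what's left)
    }
}
{code}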

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>Priority: Major
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20805 - Still Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20805/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testUpdateStream

Error Message:
Error from server at http://127.0.0.1:38195/solr: Could not fully create 
collection: destinationCollection

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38195/solr: Could not fully create collection: 
destinationCollection
at 
__randomizedtesting.SeedInfo.seed([E030C84481650261:28770C3E886DFF31]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:183)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:200)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testUpdateStream(StreamExpressionTest.java:3958)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 212 - Still Failing

2017-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/212/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeAdded/127.0.0.1:52425_solr wasn't created

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeAdded/127.0.0.1:52425_solr 
wasn't created
at 
__randomizedtesting.SeedInfo.seed([7C8F5B58BD09AB84:6435D354B33C666B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration(TriggerIntegrationTest.java:909)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig

Error Message:
The max direct memory is likely too low.  Either increase it (by adding 
-XX:MaxDirectMemorySize=<size>g -XX:+UseLargePages to your containers 

[jira] [Resolved] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11144.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

Big thank you, [~houstonputman], for writing these docs and then being so 
patient waiting for the review. 

> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
> Fix For: 7.2, master (8.0)
>
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11597) Implement RankNet.

2017-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236325#comment-16236325
 ] 

ASF GitHub Bot commented on SOLR-11597:
---

GitHub user airalcorn2 opened a pull request:

https://github.com/apache/lucene-solr/pull/270

Implement RankNet [SOLR-11597].

As described in [this tutorial](https://github.com/airalcorn2/Solr-LTR).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/airalcorn2/lucene-solr ranknet

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/270.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #270


commit 24f11aeb7f7f5ba6a2c9e96c959ffba87fff885a
Author: Michael A. Alcorn 
Date:   2017-11-02T17:38:09Z

Implement RankNet.

commit fe1c9c99a455ca6928c637dc6b2c20341668b9ab
Author: Michael A. Alcorn 
Date:   2017-11-02T17:39:45Z

Wording.

commit d752b243e4dad968e862d71d3f3b1bf34ab94aa2
Author: Michael A. Alcorn 
Date:   2017-11-02T18:15:07Z

Fix explain.




> Implement RankNet.
> --
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael A. Alcorn
>Priority: Normal
>
> Implement RankNet as described in [this 
> tutorial|https://github.com/airalcorn2/Solr-LTR].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #270: Implement RankNet [SOLR-11597].

2017-11-02 Thread airalcorn2
GitHub user airalcorn2 opened a pull request:

https://github.com/apache/lucene-solr/pull/270

Implement RankNet [SOLR-11597].

As described in [this tutorial](https://github.com/airalcorn2/Solr-LTR).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/airalcorn2/lucene-solr ranknet

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/270.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #270


commit 24f11aeb7f7f5ba6a2c9e96c959ffba87fff885a
Author: Michael A. Alcorn 
Date:   2017-11-02T17:38:09Z

Implement RankNet.

commit fe1c9c99a455ca6928c637dc6b2c20341668b9ab
Author: Michael A. Alcorn 
Date:   2017-11-02T17:39:45Z

Wording.

commit d752b243e4dad968e862d71d3f3b1bf34ab94aa2
Author: Michael A. Alcorn 
Date:   2017-11-02T18:15:07Z

Fix explain.




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11597) Implement RankNet.

2017-11-02 Thread Michael A. Alcorn (JIRA)
Michael A. Alcorn created SOLR-11597:


 Summary: Implement RankNet.
 Key: SOLR-11597
 URL: https://issues.apache.org/jira/browse/SOLR-11597
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Michael A. Alcorn
Priority: Normal


Implement RankNet as described in [this 
tutorial|https://github.com/airalcorn2/Solr-LTR].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236318#comment-16236318
 ] 

Adrien Grand commented on SOLR-11595:
-

Should we fix it in Lucene? We don't really need to create a MultiTerms 
instance, just to aggregate statistics?
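
As a rough sketch of that suggestion (not code from the patch; the helper name and the Lucene 7.x calls used here are assumptions), the per-field statistics can be summed leaf by leaf without materializing a MultiTerms:

{code}
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.CollectionStatistics;

public final class LeafStatsExample {
  // Aggregate per-segment term statistics for one field instead of building a
  // MultiTerms over the whole reader. (Ignores the "-1 = not stored" convention
  // for brevity.)
  static CollectionStatistics sumLeafStatistics(IndexReader reader, String field) throws IOException {
    long docCount = 0, sumTotalTermFreq = 0, sumDocFreq = 0;
    for (LeafReaderContext ctx : reader.leaves()) {
      Terms terms = ctx.reader().terms(field);   // per-segment Terms, no merging involved
      if (terms == null) {
        continue;                                // field absent from this segment
      }
      docCount += terms.getDocCount();
      sumTotalTermFreq += terms.getSumTotalTermFreq();
      sumDocFreq += terms.getSumDocFreq();
    }
    return new CollectionStatistics(field, reader.maxDoc(), docCount, sumTotalTermFreq, sumDocFreq);
  }
}
{code}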

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many (150) 
> fields in the query shows that building the MultiTerms here is expensive.  
> Fortunately it turns out that Solr already has a cached instance via 
> {{SlowCompositeReaderWrapper}} (using MultiFields, which has a 
> ConcurrentHashMap of the MultiTerms keyed by field String).
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straightforward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236315#comment-16236315
 ] 

ASF subversion and git services commented on SOLR-11144:


Commit ebcf7c7acfd49d29193cb44fa6f1759f0d156658 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ebcf7c7 ]

SOLR-11144: Add Analytics Component docs to the Ref Guide


> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236314#comment-16236314
 ] 

ASF subversion and git services commented on SOLR-11144:


Commit 6428ddb10e975f8b8955aebb9b59e5de201face0 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6428ddb ]

SOLR-11144: Add Analytics Component docs to the Ref Guide


> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236299#comment-16236299
 ] 

Adrien Grand commented on SOLR-11078:
-

Adding points and terms to the same field would work, but also hurt indexing 
throughput and increase disk usage.

Maybe we should update the recommendations instead, so that id fields would use 
StrField while true numbers would use point fields?

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236296#comment-16236296
 ] 

Shawn Heisey commented on SOLR-11078:
-

Adding the full-precision terms to increase performance and capability is a 
good idea, but we do need to think about the situation where somebody has an 
existing index with a point field that is upgraded to a new release with this 
capability.  Anything reliant on the new data won't work correctly on that 
index.  I would hope for any related error messages to not be cryptic, so that 
it will be clear that they need to reindex.


> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236266#comment-16236266
 ] 

ASF subversion and git services commented on SOLR-10680:


Commit 2136fc5ecdb90b24391306a8126bda9e545e8dcf in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2136fc5 ]

SOLR-10680: Add minMaxScale Stream Evaluator


> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-10680.patch, SOLR-10680.patch
>
>
> The minMaxScale Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}
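
For a concrete picture of the scaling described above (an illustrative sketch, not taken from the patch), each value v maps to lo + (v - min(col)) / (max(col) - min(col)) * (hi - lo) for a target range [lo, hi]:

{code}
import java.util.Arrays;

public class MinMaxScaleExample {
  public static void main(String[] args) {
    double[] colA = {10, 20, 30};
    double lo = 0, hi = 1;                       // target range, matching minMaxScale(colA, 0, 1)
    double colMin = Arrays.stream(colA).min().getAsDouble();
    double colMax = Arrays.stream(colA).max().getAsDouble();
    double[] scaled = Arrays.stream(colA)
        .map(v -> lo + (v - colMin) / (colMax - colMin) * (hi - lo))
        .toArray();
    System.out.println(Arrays.toString(scaled)); // [0.0, 0.5, 1.0]
  }
}
{code}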



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6992 - Still Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6992/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate

Error Message:
{bar=[TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "FAILED", "SUCCEEDED"],   
"afterAction":"test",   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[ "test", "test1"]}, stage=BEFORE_ACTION, 
actionName='test', event={   "id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz", 
  "source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "FAILED", "SUCCEEDED"],   
"afterAction":"test",   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[ "test", "test1"]}, stage=AFTER_ACTION, 
actionName='test', event={   "id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz", 
  "source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "FAILED", "SUCCEEDED"],   
"afterAction":"test",   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[ "test", "test1"]}, stage=BEFORE_ACTION, 
actionName='test1', event={   
"id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz",   
"source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "FAILED", "SUCCEEDED"],   
"afterAction":"test",   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[ "test", "test1"]}, stage=FAILED, 
actionName='test1', event={   
"id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz",   
"source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}], 
foo=[TestEvent{timestamp=150964077015200, config={   
"trigger":"node_added_trigger",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "test", "test1"],   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":"test"}, stage=STARTED, actionName='null', event={   
"id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz",   
"source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "test", "test1"],   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":"test"}, stage=BEFORE_ACTION, actionName='test', event={   
"id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz",   
"source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "test", "test1"],   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":"test"}, stage=AFTER_ACTION, actionName='test', event={   
"id":"14f35247bbca4880T38pxvh7oe4apm90c9rly0onvz",   
"source":"node_added_trigger",   "eventTime":150964076814600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964076814600], "_enqueue_time_":150964077015100,   
  "nodeNames":["127.0.0.1:59468_solr"]}}, message='null'}, 
TestEvent{timestamp=150964077015400, config={   
"trigger":"node_added_trigger",   "stage":[  

[jira] [Commented] (LUCENE-8017) FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache

2017-11-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236257#comment-16236257
 ] 

Adrien Grand commented on LUCENE-8017:
--

Indeed, only caching at the segment level is supported.

bq. // no point cacheing this! (in MatchNoDocsQuery)

For similar reasons as MatchAllDocsQuery I think we should return the core 
cache helper? For instance say the query is A OR B OR C and C rewrites to a 
MatchNoDocsQuery, it would still make sense to cache the BooleanQuery?

> FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache
> 
>
> Key: LUCENE-8017
> URL: https://issues.apache.org/jira/browse/LUCENE-8017
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8017.patch, LUCENE-8017.patch
>
>
> The QueryCache assumes that queries will return the same set of documents 
> when run over the same segment, independent of all other segments held by the 
> parent IndexSearcher.  However, both FunctionRangeQuery and 
> FunctionMatchQuery can select hits based on score, which depend on term 
> statistics over the whole index, and could therefore theoretically return 
> different result sets on a given segment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236256#comment-16236256
 ] 

ASF subversion and git services commented on SOLR-10680:


Commit e915078707ee859ce4e3ec53398d2986be30aef0 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e915078 ]

SOLR-10680: Add minMaxScale Stream Evaluator


> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-10680.patch, SOLR-10680.patch
>
>
> The minMaxScale Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8017) FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache

2017-11-02 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236244#comment-16236244
 ] 

Alan Woodward commented on LUCENE-8017:
---

bq. getReaderHelper does not mean caching on the top-level reader

So there's no way of caching on the top-level reader at the moment?  Fair 
enough, I'll remove the calls to getReaderHelper and replace them with null for 
now.  We can improve the caching of ValuesSource-related queries in a follow-up.

> FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache
> 
>
> Key: LUCENE-8017
> URL: https://issues.apache.org/jira/browse/LUCENE-8017
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8017.patch, LUCENE-8017.patch
>
>
> The QueryCache assumes that queries will return the same set of documents 
> when run over the same segment, independent of all other segments held by the 
> parent IndexSearcher.  However, both FunctionRangeQuery and 
> FunctionMatchQuery can select hits based on score, which depend on term 
> statistics over the whole index, and could therefore theoretically return 
> different result sets on a given segment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236242#comment-16236242
 ] 

Shawn Heisey commented on SOLR-11078:
-

bq.  So what we might want to do is index full-precision terms in Solr 
PointField base class so that it has both.

If it doesn't destroy the advantages, I like that idea.  That would also make 
point fields usable as uniqueKey, wouldn't it?


> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8017) FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache

2017-11-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236168#comment-16236168
 ] 

Adrien Grand commented on LUCENE-8017:
--

bq. reader-specific, because it uses global stats

In case there is confusion, getReaderHelper does not mean caching on the 
top-level reader, it means taking deletes and dv updates into account (I got a 
bit confused since you mentioned top-level statistics).

I agree there are use-cases for caching on the core+dv-updates+deletes key, but 
we have no way to know whether it is safe to do. While caching on a core is ok 
due to the fact that segments get merged less and less often as they get 
bigger, caching on deletes and dv updates is problematic: if there is a 
constant stream of updates, there would be very little reuse. This wouldn't be a 
big deal with a regular cache, but the query cache has the unusual property 
that caching a clause of a query can take 10x longer than running the query 
(think e.g. of a selective query and a filter that matches most of the index). 
This makes caching dangerous if reuse of cache entries is not likely. And 
{{FunctionMatchQuery}} is a worst-case scenario since it requires a linear scan 
of the documents of the segment.

If we want to start caching on the core+dv-updates+deletes key, we should find 
a way to make sure that the index is mostly static so that cache entries would 
be reused.
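
To make the core-vs-reader distinction concrete, here is a rough sketch assuming the Lucene 7.x {{IndexReader.CacheHelper}} API (class and variable names are illustrative): the core cache key of a segment survives deletes and dv updates, while the reader cache key changes on reopen, so a core+deletes key would rarely be reused under constant updates.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class CacheKeyExample {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new RAMDirectory();
         IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      for (String id : new String[] {"1", "2"}) {
        Document doc = new Document();
        doc.add(new StringField("id", id, Field.Store.NO));
        w.addDocument(doc);
      }
      w.commit();

      DirectoryReader r1 = DirectoryReader.open(dir);
      LeafReader leaf1 = r1.leaves().get(0).reader();
      Object core1 = leaf1.getCoreCacheHelper().getKey();
      Object view1 = leaf1.getReaderCacheHelper().getKey();

      w.deleteDocuments(new Term("id", "1"));                  // a delete, no merge
      w.commit();

      DirectoryReader r2 = DirectoryReader.openIfChanged(r1);  // non-null: the index changed
      LeafReader leaf2 = r2.leaves().get(0).reader();
      // Expected: the core key is unchanged, the reader (view) key is not.
      System.out.println("same core key:   " + (core1 == leaf2.getCoreCacheHelper().getKey()));
      System.out.println("same reader key: " + (view1 == leaf2.getReaderCacheHelper().getKey()));

      r1.close();
      r2.close();
    }
  }
}
{code}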

> FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache
> 
>
> Key: LUCENE-8017
> URL: https://issues.apache.org/jira/browse/LUCENE-8017
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8017.patch, LUCENE-8017.patch
>
>
> The QueryCache assumes that queries will return the same set of documents 
> when run over the same segment, independent of all other segments held by the 
> parent IndexSearcher.  However, both FunctionRangeQuery and 
> FunctionMatchQuery can select hits based on score, which depend on term 
> statistics over the whole index, and could therefore theoretically return 
> different result sets on a given segment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Attachment: SOLR-10680.patch

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-10680.patch, SOLR-10680.patch
>
>
> The minMaxScale Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236150#comment-16236150
 ] 

Markus Jelsma commented on SOLR-11078:
--

Hello David, 

The particular index we reverted does range queries, facets, json-facets and 
value look-ups. Some indexes we haven't reverted use Point for boosting only. 
Another index we won't move to Point does facets.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 726 - Still Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/726/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testCooldown

Error Message:
[TestEvent{timestamp=150964005045800, 
config=org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig$TriggerListenerConfig@78c1893,
 stage=SUCCEEDED, actionName='null', event={   
"id":"14f351a066872c00T51eizgsu5msabk9rrxwh7x6vf",   
"source":"node_added_cooldown_trigger",   "eventTime":150964004945600,   
"eventType":"NODEADDED",   "properties":{ 
"eventTimes":[150964004945600], "_enqueue_time_":150964005045700,   
  "nodeNames":["127.0.0.1:42101_solr"]}}, message='null'}]

Stack Trace:
java.lang.AssertionError: [TestEvent{timestamp=150964005045800, 
config=org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig$TriggerListenerConfig@78c1893,
 stage=SUCCEEDED, actionName='null', event={
  "id":"14f351a066872c00T51eizgsu5msabk9rrxwh7x6vf",
  "source":"node_added_cooldown_trigger",
  "eventTime":150964004945600,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[150964004945600],
"_enqueue_time_":150964005045700,
"nodeNames":["127.0.0.1:42101_solr"]}}, message='null'}]
at 
__randomizedtesting.SeedInfo.seed([56D8A1145BACAB4F:6766CCF02506DEBD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testCooldown(TriggerIntegrationTest.java:1200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236123#comment-16236123
 ] 

David Smiley commented on SOLR-11078:
-

bq. Possibly dumb question: Can a term be added to Point fields so they have 
the full capability that Trie has

Not a dumb question Shawn!  From what I can see in Lucene Field & 
DefaultIndexingChain classes, it appears it's okay to have a field that both 
has a terms index and a points index, just as it's okay for it to have other 
things (docValues, etc.).  So what we might want to do is index full-precision 
terms in Solr PointField base class so that it has both. Non-range queries 
would use the terms index. Certain faceting features, like facet.method=enum 
could then leverage the terms index as well.  I think this is a good idea!  Of 
course it should be toggle-able.  _I'll file an issue if people like this 
idea._  I wonder how much total overhead there is in adding the terms index.

bq. David Smiley Any chance you could add that to the docs? I think that would 
close this JIRA, no?

Docs: Ooookay.  Let's hear back from [~t...@bidorbuy.co.za] or [~markus17] on 
what their queries were like before closing the issue so that we can be sure we 
know what the cause is.
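
A rough Lucene-level sketch of that idea (illustrative only, not the proposed Solr change; the plain string term is a simplification of how full-precision values would really be encoded): one field name carrying a points index for ranges, a full-precision term for exact lookups, and docValues.

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public final class PointPlusTermExample {
  // Hypothetical helper: index the same logical value as points, a term and docValues.
  static void addPrice(IndexWriter writer, int price) throws Exception {
    Document doc = new Document();
    doc.add(new IntPoint("price", price));                                      // points: range queries
    doc.add(new StringField("price", Integer.toString(price), Field.Store.NO)); // full-precision term
    doc.add(new NumericDocValuesField("price", price));                         // docValues: sort/facet
    writer.addDocument(doc);
  }

  // An exact-match query could then hit the terms index instead of a point range...
  static Query exactByTerm(int price) {
    return new TermQuery(new Term("price", Integer.toString(price)));
  }

  // ...while range queries keep using the points index.
  static Query range(int lo, int hi) {
    return IntPoint.newRangeQuery("price", lo, hi);
  }
}
{code}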

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11072) Implement trigger for searchRate event type

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236084#comment-16236084
 ] 

ASF subversion and git services commented on SOLR-11072:


Commit f48bfaea4f51e9c77655cff270233f160c4934ac in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f48bfae ]

SOLR-11072: Make the new test more robust to side-effects from other tests in 
the suite.


> Implement trigger for searchRate event type
> ---
>
> Key: SOLR-11072
> URL: https://issues.apache.org/jira/browse/SOLR-11072
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11072.patch, SOLR-11072.patch
>
>
> Implement support for triggers based on searchRate event type. This can be 
> used to add replicas when the rate of queries increases over a threshold and 
> remains high for a configurable period of time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11072) Implement trigger for searchRate event type

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236083#comment-16236083
 ] 

ASF subversion and git services commented on SOLR-11072:


Commit 9c8ff0883a2c6adb7fa584a9c3c11a1233734f5b in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c8ff08 ]

SOLR-11072: Fix creation of searchRate triggers via API, add a unit test, other 
minor edits.


> Implement trigger for searchRate event type
> ---
>
> Key: SOLR-11072
> URL: https://issues.apache.org/jira/browse/SOLR-11072
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11072.patch, SOLR-11072.patch
>
>
> Implement support for triggers based on searchRate event type. This can be 
> used to add replicas when the rate of queries increases over a threshold and 
> remains high for a configurable period of time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11072) Implement trigger for searchRate event type

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236082#comment-16236082
 ] 

ASF subversion and git services commented on SOLR-11072:


Commit dc6119b85d24750523c1bd41e9d7f70923ce03be in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dc6119b ]

SOLR-11072: Make the new test more robust to side-effects from other tests in 
the suite.


> Implement trigger for searchRate event type
> ---
>
> Key: SOLR-11072
> URL: https://issues.apache.org/jira/browse/SOLR-11072
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11072.patch, SOLR-11072.patch
>
>
> Implement support for triggers based on searchRate event type. This can be 
> used to add replicas when the rate of queries increases over a threshold and 
> remains high for a configurable period of time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8032) Test failed: RandomGeoShapeRelationshipTest.testRandomContains

2017-11-02 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236042#comment-16236042
 ] 

Karl Wright commented on LUCENE-8032:
-

[~ivera], the factory method is called in your test:

{code}
pointList.add(new GeoPoint(PlanetModel.WGS84, 0.06776123345311073, -0.7752474170087745));
pointList.add(new GeoPoint(PlanetModel.WGS84, 0.11666112095362069, -0.8228149925456804));
pointList.add(new GeoPoint(PlanetModel.WGS84, 0.08767696070608244, -0.9145966780640845));
GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.WGS84, pointList);
{code}

This should construct a GeoConcavePolygon if warranted, no?


> Test failed: RandomGeoShapeRelationshipTest.testRandomContains
> --
>
> Key: LUCENE-8032
> URL: https://issues.apache.org/jira/browse/LUCENE-8032
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: David Smiley
>Assignee: Karl Wright
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.2
>
> Attachments: LUCENE-8032.patch
>
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20800/
> {noformat}
> Error Message:
> geoAreaShape: GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
> center=[lat=-0.00871130560892533, 
> lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
> Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
> accuracy=2.01444186927E-4} shape: GeoRectangle: 
> {planetmodel=PlanetModel.WGS84, 
> toplat=0.18851664435052304(10.801208089253723), 
> bottomlat=-1.4896034997154073(-85.34799368160976), 
> leftlon=-1.4970589804391838(-85.7751612613233), 
> rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>
> Stack Trace:
> java.lang.AssertionError: geoAreaShape: GeoExactCircle: 
> {planetmodel=PlanetModel.WGS84, center=[lat=-0.00871130560892533, 
> lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
> Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
> accuracy=2.01444186927E-4}
> shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
> toplat=0.18851664435052304(10.801208089253723), 
> bottomlat=-1.4896034997154073(-85.34799368160976), 
> leftlon=-1.4970589804391838(-85.7751612613233), 
> rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>
> at 
> __randomizedtesting.SeedInfo.seed([87612C9805977C6F:B087E212A0C8DB25]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at 
> org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236036#comment-16236036
 ] 

Yonik Seeley commented on SOLR-11078:
-

Seems like deprecation of trie fields may have been premature, and we should 
rethink when to deprecate / remove them, given that people are using them (and 
will continue to) in the 7.x line.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> Since Solr 6.4.2 performance has dropped. We have two indices per server 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236026#comment-16236026
 ] 

Erick Erickson commented on SOLR-11078:
---

[~dsmiley] Any chance you could add that to the docs? I think that would close 
this JIRA, no?








[jira] [Commented] (LUCENE-8019) Add a root failure cause to Explanation

2017-11-02 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16236007#comment-16236007
 ] 

Michael McCandless commented on LUCENE-8019:


bq. So – are you suggesting it would score every sub-query independently?

Yeah, I think so?  Scoring each sub-query on its own should ensure that if a 
child scorer is not positioned on the current hit, that clause genuinely did 
not match the hit.  But hopefully it can still compute the same scores that 
BQ's more advanced scorers do?  Or maybe order-of-operation differences make 
that too hard?
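
As a rough illustration of that idea, here is a minimal sketch (not the 
attached patch): it assumes a top-level BooleanQuery and a hypothetical 
PerClauseExplainer helper, and it only reports MUST clauses that fail to match.
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;

public class PerClauseExplainer {

  /** Explains every clause of the query independently against docId and
   *  returns a one-line summary for each required clause that did not match. */
  public static List<String> explainClauses(IndexSearcher searcher,
                                            BooleanQuery query,
                                            int docId) throws IOException {
    List<String> failures = new ArrayList<>();
    for (BooleanClause clause : query.clauses()) {
      // Scoring the sub-query on its own guarantees that "scorer not on this
      // doc" really does mean "this clause did not match this doc".
      Explanation exp = searcher.explain(clause.getQuery(), docId);
      if (clause.getOccur() == BooleanClause.Occur.MUST && !exp.isMatch()) {
        failures.add(clause.getQuery() + ": " + exp.getDescription());
      }
    }
    return failures;
  }
}
{code}
Because each clause goes through IndexSearcher.explain() on its own, the 
per-clause values are not guaranteed to combine into exactly the score the 
BooleanQuery scorers would produce, which is the order-of-operations caveat 
above.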

> Add a root failure cause to Explanation 
> 
>
> Key: LUCENE-8019
> URL: https://issues.apache.org/jira/browse/LUCENE-8019
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Mike Sokolov
>Priority: Major
> Attachments: LUCENE-8019.patch, LUCENE_8019.patch
>
>
> If you need to analyze the root cause of a query's failure to match some 
> document, you can use the Weight.explain() API. But if you want to do a gross 
> analysis of a whole batch of queries scraped from a log, say queries that 
> once matched but no longer do after some refactoring or other large-scale 
> change, the Explanation isn't well suited to that. You can try parsing its 
> textual output, which is fairly regular, but instead I found it convenient to 
> add some boolean structure to Explanation and use it to find the failing 
> leaves of the Explanation tree, reporting only those.
> This patch adds a "condition" to each Explanation, which can be REQUIRED, 
> OPTIONAL, PROHIBITED, or NONE. The conditions correspond in obvious ways to 
> the boolean Occur values, except for NONE, which marks a node that cannot be 
> decomposed further. It also adds new Explanation construction methods for 
> creating Explanations with conditions (the existing methods default to NONE).
> Finally, Explanation.getFailureCauses() returns a list of Strings: the 
> one-line explanations of the failing sub-queries that, had some of them 
> matched, would have let the overall query match.
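
A rough approximation of getFailureCauses() is possible today by walking the 
existing Explanation tree; the sketch below (illustrative, not part of the 
patch) collects every non-matching leaf. Without the REQUIRED/OPTIONAL/ 
PROHIBITED conditions proposed here it cannot distinguish a harmless 
non-matching SHOULD clause from a fatal MUST clause, which is exactly the gap 
the patch fills.
{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.search.Explanation;

public class FailureCauses {

  /** Collects the descriptions of all non-matching leaf Explanations. */
  public static List<String> collect(Explanation exp) {
    List<String> causes = new ArrayList<>();
    collect(exp, causes);
    return causes;
  }

  private static void collect(Explanation exp, List<String> causes) {
    Explanation[] details = exp.getDetails();
    if (details.length == 0) {
      // Leaf node: record it only if it failed to match.
      if (!exp.isMatch()) {
        causes.add(exp.getDescription());
      }
      return;
    }
    for (Explanation child : details) {
      collect(child, causes);
    }
  }
}
{code}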






[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235991#comment-16235991
 ] 

Shawn Heisey commented on SOLR-11078:
-

Possibly dumb question: can a term be added to Point fields so they have the 
full capabilities that Trie fields have, or would doing that destroy the 
advantages that led Lucene to deprecate all the other numeric types?
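
One workaround at the Lucene level is to index the same value twice rather 
than extend the point field itself; a minimal sketch is shown below (field 
names are illustrative, not taken from any schema attached to this issue).
{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.search.Query;

public class DualIndexedInt {

  /** Indexes the value as a point (for ranges), a term (for exact lookups)
   *  and docValues (for sorting/faceting). */
  public static Document makeDoc(int price) {
    Document doc = new Document();
    doc.add(new IntPoint("price", price));
    doc.add(new StringField("price_str", Integer.toString(price), Field.Store.NO));
    doc.add(new NumericDocValuesField("price", price));
    return doc;
  }

  /** Range queries still go through the BKD-backed point field. */
  public static Query priceRange(int lower, int upper) {
    return IntPoint.newRangeQuery("price", lower, upper);
  }
}
{code}
In Solr terms this roughly corresponds to keeping both a *Point field and a 
Trie or string sibling field (for example via copyField), at the cost of extra 
index size rather than a change to the point fields themselves.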








[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20804 - Still Unstable!

2017-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20804/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest

Error Message:
Error from server at https://127.0.0.1:36255/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:36255/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([AA28E36B9A89E8A0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:183)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:200)
at 
org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([AA28E36B9A89E8A0:BE60B83EB98E55BE]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Commented] (SOLR-11072) Implement trigger for searchRate event type

2017-11-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235955#comment-16235955
 ] 

ASF subversion and git services commented on SOLR-11072:


Commit 6008b186a247a7860223df877e7c7965a68a18a7 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6008b18 ]

SOLR-11072: Fix creation of searchRate triggers via API, add a unit test, other 
minor edits.


> Implement trigger for searchRate event type
> ---
>
> Key: SOLR-11072
> URL: https://issues.apache.org/jira/browse/SOLR-11072
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11072.patch, SOLR-11072.patch
>
>
> Implement support for triggers based on the searchRate event type. This can 
> be used to add replicas when the query rate exceeds a threshold and remains 
> high for a configurable period of time.
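
For anyone wanting to try this, such a trigger is registered through the 
autoscaling write API. The sketch below uses plain JDK HTTP against the v1 
/solr/admin/autoscaling endpoint; the trigger-specific properties 
("collection", "rate") are illustrative and should be checked against the 
reference guide for the release in use.
{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SetSearchRateTrigger {

  public static void main(String[] args) throws Exception {
    // set-trigger command; "collection" and "rate" values are illustrative.
    String payload = "{\n"
        + "  \"set-trigger\": {\n"
        + "    \"name\": \"search_rate_trigger\",\n"
        + "    \"event\": \"searchRate\",\n"
        + "    \"collection\": \"myCollection\",\n"
        + "    \"rate\": 100,\n"
        + "    \"waitFor\": \"1m\",\n"
        + "    \"enabled\": true\n"
        + "  }\n"
        + "}";

    URL url = new URL("http://localhost:8983/solr/admin/autoscaling");
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestMethod("POST");
    con.setRequestProperty("Content-Type", "application/json");
    con.setDoOutput(true);
    try (OutputStream out = con.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + con.getResponseCode());
  }
}
{code}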





