[JENKINS] Lucene-Solr-NightlyTests-7.3 - Build # 26 - Failure

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.3/26/

5 tests failed.
FAILED:  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test

Error Message:
Error from server at http://127.0.0.1:46622/solr: Could not find collection : 
collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46622/solr: Could not find collection : 
collection1
at 
__randomizedtesting.SeedInfo.seed([5301A379075BCB62:DB559CA3A9A7A69A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AssignBackwardCompatibilityTest.test(AssignBackwardCompatibilityTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475280#comment-16475280
 ] 

Mark Miller edited comment on SOLR-12297 at 5/15/18 5:28 AM:
-

This is going to be a bit of a slow burn; I just kind of hack on it here and 
there when I have the time. But to start measuring some progress:

Currently approximately 40-50 of the 806 core tests don't pass with everything 
forced to full HTTP/2 and use of the new Http2SolrClient.

I've got the new V2 API stuff working, it seems.
 Proxying to remote replicas is basically working.

However, minimally still to do:

TODO: Finish client side SSL support
 TODO: All tests passing
 TODO: Tune settings / streaming payloads vs not, etc
 TODO: Configure shipping Jetty
 TODO: Serve HTTP2 and HTTP on same port
 TODO: Multiple content streams with Http2SolrClient
 TODO: Finish and finalize APIs, especially Async API
 TODO: Performance and scale testing
TODO: Basic Auth and Kerberos / Security support
TODO: Special HttpClientUtil stuff like lifecycle injectors

I haven't focused on just extracting the client for HTTP 1.1 use yet because I'm 
learning more and more about what needs to happen with it as I work through 
many of the outstanding issues.


was (Author: markrmil...@gmail.com):
This is going to be a bit of a slow burn; I just kind of hack on it here and 
there when I have the time. But to start measuring some progress:

Currently approximately 40-50 of the 806 core tests don't pass with everything 
forced to full HTTP/2 and use of the new Http2SolrClient.

I've got the new V2 API stuff working, it seems.
 Proxying to remote replicas is basically working.

However, minimally still to do:

TODO: Finish client side SSL support
 TODO: All tests passing
 TODO: Tune settings / streaming payloads vs not, etc
 TODO: Configure shipping Jetty
 TODO: Serve HTTP2 and HTTP on same port
 TODO: Multiple content streams with Http2SolrClient
 TODO: Finish and finalize APIs, especially Async API
TODO: Performance and scale testing

I haven't focused on just extracting the client for HTTP 1.1 use yet because I'm 
learning more and more about what needs to happen with it as I work through 
many of the outstanding issues.

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.
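
A rough sketch of the kind of usage the issue describes, using Jetty's HttpClient 
over its HTTP/2 transport. The class names follow the Jetty 9.4-era API from memory, 
and the URL and setup are illustrative assumptions, not Solr's actual Http2SolrClient:

{code:java}
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.client.api.Result;
import org.eclipse.jetty.client.util.BufferingResponseListener;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class JettyHttp2Sketch {
  public static void main(String[] args) throws Exception {
    // One HttpClient backed by the HTTP/2 transport; requests multiplex over a single connection.
    HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()), null);
    client.start();

    // Blocking request.
    ContentResponse rsp = client.GET("http://localhost:8983/solr/collection1/select?q=*:*");
    System.out.println(rsp.getStatus());

    // Async request: the listener runs when the response completes.
    client.newRequest("http://localhost:8983/solr/collection1/select?q=*:*")
        .send(new BufferingResponseListener() {
          @Override
          public void onComplete(Result result) {
            System.out.println(getContentAsString());
          }
        });

    // client.stop() would normally be called on shutdown.
  }
}
{code}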



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475280#comment-16475280
 ] 

Mark Miller commented on SOLR-12297:


This is going to be a bit of a slow burn; I just kind of hack on it here and 
there when I have the time. But to start measuring some progress:

Currently approximately 40-50 of the 806 core tests don't pass with everything 
forced to full HTTP/2 and use of the new Http2SolrClient.

I've got the new V2 API stuff working, it seems.
 Proxying to remote replicas is basically working.

However, minimally still to do:

TODO: Finish client side SSL support
 TODO: All tests passing
 TODO: Tune settings / streaming payloads vs not, etc
 TODO: Configure shipping Jetty
 TODO: Serve HTTP2 and HTTP on same port
 TODO: Multiple content streams with Http2SolrClient
 TODO: Finish and finalize APIs, especially Async API
TODO: Performance and scale testing

I haven't focused on just extracting the client for HTTP 1.1 use yet because I'm 
learning more and more about what needs to happen with it as I work through 
many of the outstanding issues.

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+5) - Build # 7315 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7315/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseSerialGC

37 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testMaximumResultsForSuggest

Error Message:
Directory 
(MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.SpellCheckComponentTest_723FAA10273F101-002\init-core-data-001\spellchecker1
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@7669f5ef) still has 
pending deleted files; cannot initialize IndexWriter

Stack Trace:
java.lang.IllegalArgumentException: Directory 
(MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.SpellCheckComponentTest_723FAA10273F101-002\init-core-data-001\spellchecker1
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@7669f5ef) still has 
pending deleted files; cannot initialize IndexWriter
at 
__randomizedtesting.SeedInfo.seed([723FAA10273F101:8B2E7A62CC5FAE4C]:0)
at org.apache.lucene.index.IndexWriter.(IndexWriter.java:699)
at 
org.apache.lucene.search.spell.SpellChecker.clearIndex(SpellChecker.java:455)
at 
org.apache.solr.spelling.IndexBasedSpellChecker.build(IndexBasedSpellChecker.java:87)
at 
org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:128)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2510)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:982)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:951)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testMaximumResultsForSuggest(SpellCheckComponentTest.java:83)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)

[JENKINS] Lucene-Solr-repro - Build # 632 - Still Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/632/

[...truncated 63 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2525/consoleText

[repro] Revision: a0acc63d020fbe3f50980820c5aba6601785eb68

[repro] Repro line:  ant test  -Dtestcase=BasicDistributedZk2Test 
-Dtests.method=test -Dtests.seed=1F26F6B62AF44BAD -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=America/Indiana/Knox 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=1F26F6B62AF44BAD -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=nl-NL -Dtests.timezone=Brazil/West 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=1F26F6B62AF44BAD 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk 
-Dtests.timezone=Europe/Kirov -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f959777995c1cfb7ae839c1729cfce62403e1586
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout a0acc63d020fbe3f50980820c5aba6601785eb68

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   BasicDistributedZk2Test
[repro]   SearchRateTriggerIntegrationTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.BasicDistributedZk2Test|*.SearchRateTriggerIntegrationTest|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=1F26F6B62AF44BAD -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=America/Indiana/Knox 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 7740 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.BasicDistributedZk2Test
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout f959777995c1cfb7ae839c1729cfce62403e1586

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12202) failed to run solr-exporter.cmd on Windows platform

2018-05-14 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12202:

Fix Version/s: 7.3.1

> failed to run solr-exporter.cmd on Windows platform
> ---
>
> Key: SOLR-12202
> URL: https://issues.apache.org/jira/browse/SOLR-12202
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Major
> Fix For: 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12202.patch, SOLR-12202_branch_7_3.patch
>
>
> failed to run solr-exporter.cmd on Windows platform due to the following:
> - incorrect main class name.
> - incorrect classpath specification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12256) Aliases and eventual consistency (should use sync())

2018-05-14 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-12256.
-
Resolution: Fixed

> Aliases and eventual consistency (should use sync())
> 
>
> Key: SOLR-12256
> URL: https://issues.apache.org/jira/browse/SOLR-12256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3.1
>
> Attachments: SOLR-12256.patch
>
>
> ZkStateReader.AliasesManager.update() reads alias info from ZK into the 
> ZkStateReader.  This method is called in ~5 places (+2 for tests).  In at 
> least some of these places, the caller assumes that the alias info is 
> subsequently up to date when in fact this might not be so since ZK is allowed 
> to return a stale value.  ZooKeeper.sync() can be called to force an up to 
> date value.  As with sync(), AliasManager.update() ought not to be called 
> aggressively/commonly, only in certain circumstances (e.g. _after_ failing to 
> resolve stuff that would otherwise return an error).
> And related to this eventual consistency issue, SetAliasPropCmd will throw an 
> exception if the alias doesn't exist.  Fair enough, but sometimes (as seen in 
> some tests), the node receiving the command to update Alias properties is 
> simply "behind"; it does not yet know about an alias that other nodes know 
> about.  I believe this is the cause of some failures in AliasIntegrationTest; 
> perhaps others.
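
For context on the sync-then-read pattern the issue proposes, here is a minimal 
sketch against the raw ZooKeeper client; the path and helper method are illustrative 
assumptions, not Solr's actual ZkStateReader.AliasesManager code:

{code:java}
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class SyncThenRead {
  // Force the ZK server this client is connected to to catch up with the leader,
  // then read; without sync() the read may legally return a stale value.
  static byte[] readAliases(ZooKeeper zk) throws Exception {
    CountDownLatch synced = new CountDownLatch(1);
    zk.sync("/aliases.json", (rc, path, ctx) -> synced.countDown(), null);
    synced.await();
    return zk.getData("/aliases.json", false, new Stat());
  }
}
{code}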



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-14 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12271:

Fix Version/s: (was: 7.3.1)

> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0), 7.3.1
>Reporter: Houston Putman
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.
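
A small illustration of why only negative values would be affected, assuming the bug 
is decoding sortable-encoded doc values with Double.longBitsToDouble instead of 
Lucene's NumericUtils helpers (the actual patch is in the attachments and is not 
reproduced here):

{code:java}
import org.apache.lucene.util.NumericUtils;

public class SortableDoubleDemo {
  public static void main(String[] args) {
    double value = -12.5;
    // Doubles are stored in numeric doc values as "sortable" longs.
    long stored = NumericUtils.doubleToSortableLong(value);

    // Correct decode: round-trips for every value.
    System.out.println(NumericUtils.sortableLongToDouble(stored)); // -12.5

    // Raw-bits decode: happens to work for non-negative values, but not for negative
    // ones, because the sortable encoding flips the remaining bits of negative values.
    System.out.println(Double.longBitsToDouble(stored));           // wrong for -12.5
  }
}
{code}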



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.3-Linux (64bit/jdk-11-ea+5) - Build # 227 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/227/
Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:37767//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:37767//collection1
at 
__randomizedtesting.SeedInfo.seed([69EDFBAD7873C931:E1B9C477D68FA4C9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[ANNOUNCE] Apache Solr 7.3.1 released

2018-05-14 Thread Cao Mạnh Đạt
15 May 2018, Apache Solr™ 7.3.1 available

The Lucene PMC is pleased to announce the release of Apache Solr 7.3.1

Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search and analytics, rich document
parsing, geospatial search, extensive REST APIs as well as parallel SQL.
Solr is enterprise grade, secure and highly scalable, providing fault
tolerant distributed search and indexing, and powers the search and
navigation features of many of the world's largest internet sites.

This release includes 9 bug fixes since the 7.3.0 release. Some of the
major fixes are:

* Deleting replicas sometimes fails and causes the replicas to exist in the
down state
* Upgrade commons-fileupload dependency to 1.3.3 to address CVE-2016-1000031
* Do not allow the use of absolute URIs for including other files in
solrconfig.xml and schema parsing
* A successful restore collection should mark the shard state as active and
not buffering

Furthermore, this release includes Apache Lucene 7.3.1, which includes 1 bug
fix since the 7.3.0 release.

The release is available for immediate download at:

http://www.apache.org/dyn/closer.lua/lucene/solr/7.3.1

Please read CHANGES.txt for a detailed list of changes:

https://lucene.apache.org/solr/7_3_1/changes/Changes.html

Please report any feedback to the mailing lists (
http://lucene.apache.org/solr/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring network
for distributing releases. It is possible that the mirror you are using may
not have replicated the release yet. If that is the case, please try
another mirror. This also goes for Maven access.


[ANNOUNCE] Apache Lucene 7.3.1 released

2018-05-14 Thread Cao Mạnh Đạt
15 May 2018, Apache Lucene™ 7.3.1 available

The Lucene PMC is pleased to announce the release of Apache Lucene 7.3.1.

Apache Lucene is a high-performance, full-featured text search engine
library written entirely in Java. It is a technology suitable for nearly
any application that requires full-text search, especially cross-platform.

This release contains one bug fix. The release is available for immediate
download at:
http://lucene.apache.org/core/mirrors-core-latest-redir.html

Further details of changes are available in the change log available at:
http://lucene.apache.org/core/7_3_1/changes/Changes.html

Please report any feedback to the mailing lists (
http://lucene.apache.org/core/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring network
for distributing releases. It is possible that the mirror you are using may
not have replicated the release yet. If that is the case, please try
another mirror. This also applies to Maven access.


[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 621 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/621/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

19 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/53)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10004_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10005_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1526356143990711400", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10004_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10005_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1526356143991323850",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10005_solr",   
"base_url":"http://127.0.0.1:10005/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",   
"base_url":"http://127.0.0.1:10004/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1526356143991171850",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",   
"base_url":"http://127.0.0.1:10004/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10005_solr",   
"base_url":"http://127.0.0.1:10005/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/53)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10004_solr",
  

[jira] [Comment Edited] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475166#comment-16475166
 ] 

Steve Rowe edited comment on LUCENE-8273 at 5/15/18 1:50 AM:
-

I stumbled on what looks like a {{ProtectedTermFilter}} bug when a wrapped 
filter is a filtering token filter, and the content to be analyzed contains at 
least one non-protected term prior to a protected term; in this case protection 
fails:

{code:java|title=TestProtectedTermFilter.java}
  public void testWrappedFilteringTokenFilter() throws IOException {
CharArraySet protectedTerms = new CharArraySet(5, true);
protectedTerms.add("foobar");
TokenStream stream = whitespaceMockTokenizer("foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
succeeds

stream = whitespaceMockTokenizer("wuthering foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
fails @ term 0: Actual: abc
  }
{code}

I haven't yet figured out what the problem is.  Alan, do you understand what's 
happening here?


was (Author: steve_rowe):
I stumbled on what looks like a {{ProtectedTermFilter}} bug when a wrapped 
filter is a filtering token filter, and the content to be analyzed contains at 
least one non-protected term prior to a protected term; in this case protection 
fails:

{code:java|title=TestProtectedTerm.java}
  public void testWrappedFilteringTokenFilter() throws IOException {
CharArraySet protectedTerms = new CharArraySet(5, true);
protectedTerms.add("foobar");
TokenStream stream = whitespaceMockTokenizer("foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
succeeds

stream = whitespaceMockTokenizer("wuthering foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
fails @ term 0: Actual: abc
  }
{code}

I haven't yet figured out what the problem is.  Alan, do you understand what's 
happening here?

> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.
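
As a rough sketch of the intended usage, here is a hypothetical ConditionalTokenFilter 
subclass that only routes hyphenated terms through the wrapped filter chain. The 
constructor and shouldFilter() hook follow the API visible in the attached patches and 
tests, and may differ from what is finally committed:

{code:java}
import java.util.function.Function;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.ConditionalTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class HyphenatedTermsOnlyFilter extends ConditionalTokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public HyphenatedTermsOnlyFilter(TokenStream input, Function<TokenStream, TokenStream> inner) {
    super(input, inner);
  }

  @Override
  protected boolean shouldFilter() {
    // Only terms containing a hyphen are passed through the wrapped filter(s),
    // e.g. a WordDelimiterGraphFilter; all other terms bypass it unchanged.
    for (int i = 0; i < termAtt.length(); i++) {
      if (termAtt.charAt(i) == '-') {
        return true;
      }
    }
    return false;
  }
}
{code}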



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475166#comment-16475166
 ] 

Steve Rowe commented on LUCENE-8273:


I stumbled on what looks like a {{ProtectedTermFilter}} bug when a wrapped 
filter is a filtering token filter, and the content to be analyzed contains at 
least one non-protected term prior to a protected term; in this case protection 
fails:

{code:java|title=TestProtectedTerm.java}
  public void testWrappedFilteringTokenFilter() throws IOException {
CharArraySet protectedTerms = new CharArraySet(5, true);
protectedTerms.add("foobar");
TokenStream stream = whitespaceMockTokenizer("foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
succeeds

stream = whitespaceMockTokenizer("wuthering foobar abc");
stream = new ProtectedTermFilter(protectedTerms, stream, in -> new 
LengthFilter(in, 1, 4));
assertTokenStreamContents(stream, new String[]{ "foobar", "abc" }); // 
fails @ term 0: Actual: abc
  }
{code}

I haven't yet figured out what the problem is.  Alan, do you understand what's 
happening here?

> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 631 - Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/631/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/55/consoleText

[repro] Revision: a0acc63d020fbe3f50980820c5aba6601785eb68

[repro] Repro line:  ant test  -Dtestcase=CleanupOldIndexTest 
-Dtests.method=test -Dtests.seed=8B249D3BDA10B92E -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=zh 
-Dtests.timezone=America/Juneau -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CleanupOldIndexTest 
-Dtests.seed=8B249D3BDA10B92E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=zh -Dtests.timezone=America/Juneau 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.method=testDistributedQueue -Dtests.seed=8B249D3BDA10B92E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be 
-Dtests.timezone=Antarctica/Troll -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.seed=8B249D3BDA10B92E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=be -Dtests.timezone=Antarctica/Troll 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=8B249D3BDA10B92E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=is-IS -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testTriggerThrottling -Dtests.seed=8B249D3BDA10B92E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=is-IS -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
02849fb707626cf4312f59324fd894be117787c1
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout a0acc63d020fbe3f50980820c5aba6601785eb68

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestGenericDistributedQueue
[repro]   CleanupOldIndexTest
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestGenericDistributedQueue|*.CleanupOldIndexTest|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror  -Dtests.seed=8B249D3BDA10B92E -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be 
-Dtests.timezone=Antarctica/Troll -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2058 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.CleanupOldIndexTest
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 02849fb707626cf4312f59324fd894be117787c1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1543 - Failure

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1543/

All tests passed

Build Log:
[...truncated 13332 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp/junit4-J0-20180514_225601_6698428383225238040667.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/heapdumps/java_pid13151.hprof
 ...
   [junit4] Heap dump file created [629649687 bytes in 14.488 secs]
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp/junit4-J0-20180514_225601_6695398900644130964031.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] <<< JVM J0: EOF 

[...truncated 2219 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk1.8.0_172/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=3616AC644BEBB39A -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/tools/junit4/logging.properties
 -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp
 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout
 -Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.badapples=false 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1865 - Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1865/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

11 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([61706602AA84B5E:55AE44D0C8B9DEA4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 

[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475093#comment-16475093
 ] 

Yonik Seeley commented on SOLR-12352:
-

bq. Solr's functions operate on floats - maybe this problem follows from that?

Yep.  IIRC there's a JIRA for adding types to value sources / functions (it's 
needed in quite a few places).
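
A quick sketch of the precision loss in plain Java; this is just long vs. float 
arithmetic, not Solr's actual ValueSource code:

{code:java}
public class ModPrecisionDemo {
  public static void main(String[] args) {
    long version = 1600463487761383425L;
    long now = 1526324140364L;

    // Exact 64-bit arithmetic.
    System.out.println(version % now);     // 1204927482853, per the issue description

    // A float carries only ~7 significant decimal digits, so the 19-digit _version_
    // value is rounded long before the mod is computed.
    float fv = version;
    float fn = now;
    System.out.println((long) (fv % fn));  // differs wildly from the exact remainder
  }
}
{code}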

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12355) HashJoinStream's use of String::hashCode results in non-matching tuples being considered matches

2018-05-14 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475060#comment-16475060
 ] 

Dennis Gove commented on SOLR-12355:


This also impacts OuterHashJoinStream.

> HashJoinStream's use of String::hashCode results in non-matching tuples being 
> considered matches
> 
>
> Key: SOLR-12355
> URL: https://issues.apache.org/jira/browse/SOLR-12355
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Major
>
> The following strings have been found to have hashCode conflicts, and as such 
> can result in HashJoinStream treating two tuples with fields of these values 
> as the same.
> {code:java}
> "MG!!00TNGP::Mtge::".hashCode() == "MG!!00TNH1::Mtge::".hashCode() {code}
> This means these two tuples are the same if we're comparing on field "foo"
> {code:java}
> {
>   "foo":"MG!!00TNGP::Mtge::"
> }
> {
>   "foo":"MG!!00TNH1::Mtge::"
> }
> {code}
> and these two tuples are the same if we're comparing on fields "foo,bar"
> {code:java}
> {
>   "foo":"MG!!00TNGP"
>   "bar":"Mtge"
> }
> {
>   "foo":"MG!!00TNH1"
>   "bar":"Mtge"
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12355) HashJoinStream's use of String::hashCode results in non-matching tuples being considered matches

2018-05-14 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475055#comment-16475055
 ] 

Dennis Gove commented on SOLR-12355:


I have a fix for this where, instead of calculating the string value's hashCode, 
we just use the string value as the key in the hashed set of tuples. I'm 
creating a few test cases to verify this gives us what we want.
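
A minimal sketch of that approach using plain Java collections (illustrative only; the 
real HashJoinStream internals are not shown here): keying the bucket map by the full 
String means equality is decided by String.equals(), not by hashCode() alone.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StringKeyedBuckets {
  public static void main(String[] args) {
    // HashMap still hashes the key internally, but it also checks String.equals(),
    // so colliding hash codes can no longer make distinct keys match.
    Map<String, List<String>> buckets = new HashMap<>();
    String a = "MG!!00TNGP::Mtge::";
    String b = "MG!!00TNH1::Mtge::";

    System.out.println(a.hashCode() == b.hashCode()); // true: the collision from the issue

    buckets.computeIfAbsent(a, k -> new ArrayList<>()).add("tuple-1");
    buckets.computeIfAbsent(b, k -> new ArrayList<>()).add("tuple-2");

    System.out.println(buckets.size()); // 2: the tuples land in separate buckets
  }
}
{code}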

> HashJoinStream's use of String::hashCode results in non-matching tuples being 
> considered matches
> 
>
> Key: SOLR-12355
> URL: https://issues.apache.org/jira/browse/SOLR-12355
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Major
>
> The following strings have been found to have hashCode conflicts and as such 
> can result in HashJoinStream considering two tuples with fields of these 
> values to be the same.
> {code:java}
> "MG!!00TNGP::Mtge::".hashCode() == "MG!!00TNH1::Mtge::".hashCode() {code}
> This means these two tuples are the same if we're comparing on field "foo"
> {code:java}
> {
>   "foo":"MG!!00TNGP::Mtge::"
> }
> {
>   "foo":"MG!!00TNH1::Mtge::"
> }
> {code}
> and these two tuples are the same if we're comparing on fields "foo,bar"
> {code:java}
> {
>   "foo":"MG!!00TNGP"
>   "bar":"Mtge"
> }
> {
>   "foo":"MG!!00TNH1"
>   "bar":"Mtge"
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12355) HashJoinStream's use of String::hashCode results in non-matching tuples being considered matches

2018-05-14 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-12355:
--

 Summary: HashJoinStream's use of String::hashCode results in 
non-matching tuples being considered matches
 Key: SOLR-12355
 URL: https://issues.apache.org/jira/browse/SOLR-12355
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 6.0
Reporter: Dennis Gove
Assignee: Dennis Gove


The following strings have been found to have hashCode conflicts and as such 
can result in HashJoinStream considering two tuples with fields of these values 
to be the same.


{code:java}
"MG!!00TNGP::Mtge::".hashCode() == "MG!!00TNH1::Mtge::".hashCode() {code}
This means these two tuples are the same if we're comparing on field "foo"
{code:java}
{
  "foo":"MG!!00TNGP::Mtge::"
}
{
  "foo":"MG!!00TNH1::Mtge::"
}
{code}
and these two tuples are the same if we're comparing on fields "foo,bar"
{code:java}
{
  "foo":"MG!!00TNGP"
  "bar":"Mtge"
}
{
  "foo":"MG!!00TNH1"
  "bar":"Mtge"
}{code}
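
For reference, the collision follows directly from how String.hashCode() weights 
characters: the two values are the same length and differ only in the characters 
"GP" vs "H1", and 31*'G' + 'P' and 31*'H' + '1' are both 2281, so the full hash 
sums come out identical. A quick check:
{code:java}
public class HashCollisionCheck {
  public static void main(String[] args) {
    System.out.println(31 * 'G' + 'P');                   // 2281
    System.out.println(31 * 'H' + '1');                   // 2281
    System.out.println("MG!!00TNGP::Mtge::".hashCode());  // identical...
    System.out.println("MG!!00TNH1::Mtge::".hashCode());  // ...to this
  }
}
{code}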



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475025#comment-16475025
 ] 

Hoss Man commented on SOLR-9480:


Updated patch...

This includes cleanup of some test & javadoc nocommits, but the biggest change 
is renaming {{skg(...)}} to {{relatedness(...)}} -- that's the best name I 
could come up with.

It occurred to me that I never really posted a full example of what generating an SKG 
looks like with this approach of implementing relatedness as an Aggregate 
function, so here's a complete request/response example using stackexchange 
"scifi" data...

{noformat}
curl -sS -X POST http://localhost:8983/solr/scifi/query -d 
'rows=0&q=type:QUESTION&fore=body:%22harry+potter%22&back=*:*&json.facet={
  tags : {
type : terms,
field : tags,
limit : 5,
sort : { skg: desc },
facet : {
  skg : "relatedness($fore,$back)",
  body : {
type : terms,
field : body,
limit : 5,
  sort : { skg: desc },
  facet : {
skg : "relatedness($fore,$back)"
  }
  }
}
  }
}'
{noformat}


{noformat}
{
  "responseHeader":{
"status":0,
"QTime":4402,
"params":{
  "q":"type:QUESTION",
  "json.facet":"{\n  tags : {\ntype : terms,\nfield : tags,\n
limit : 5,\nsort : { skg: desc },\nfacet : {\n  skg : 
\"relatedness($fore,$back)\",\n  body : {\ntype : terms,\n  field : 
body,\nlimit : 5,\nsort : { skg: desc },\nfacet : {\n  skg 
: \"relatedness($fore,$back)\"\n}\n  }\n}\n  }\n}",
  "back":"*:*",
  "rows":"0",
  "fore":"body:\"harry potter\""}},
  "response":{"numFound":46598,"start":0,"docs":[]
  },
  "facets":{
"count":46598,
"tags":{
  "buckets":[{
  "val":"harry-potter",
  "count":5141,
  "skg":{
"relatedness":0.70795,
"foreground_popularity":0.01113,
"background_popularity":0.03627},
  "body":{
"buckets":[{
"val":"potter",
"count":1715,
"skg":{
  "relatedness":0.83699,
  "foreground_popularity":0.01113,
  "background_popularity":0.03555}},
  {
"val":"harry",
"count":2944,
"skg":{
  "relatedness":0.76488,
  "foreground_popularity":0.01113,
  "background_popularity":0.07392}},
  {
"val":"deathly",
"count":516,
"skg":{
  "relatedness":0.41314,
  "foreground_popularity":0.0017,
  "background_popularity":0.01308}},
  {
"val":"hallows",
"count":525,
"skg":{
  "relatedness":0.4125,
  "foreground_popularity":0.00171,
  "background_popularity":0.01333}},
  {
"val":"hogwarts",
"count":1061,
"skg":{
  "relatedness":0.39054,
  "foreground_popularity":0.00229,
  "background_popularity":0.02585}}]}},
{
  "val":"jk-rowling",
  "count":107,
  "skg":{
"relatedness":0.23501,
"foreground_popularity":3.7E-4,
"background_popularity":7.5E-4},
  "body":{
"buckets":[{
"val":"attender",
"count":1,
"skg":{
  "relatedness":0.4322,
  "foreground_popularity":1.0E-5,
  "background_popularity":1.0E-5}},
  {
"val":"escapers",
"count":1,
"skg":{
  "relatedness":0.4322,
  "foreground_popularity":1.0E-5,
  "background_popularity":1.0E-5}},
  {
"val":"l'etat",
"count":1,
"skg":{
  "relatedness":0.4322,
  "foreground_popularity":1.0E-5,
  "background_popularity":1.0E-5}},
  {
"val":"mugglenet's",
"count":1,
"skg":{
  "relatedness":0.4322,
  "foreground_popularity":1.0E-5,
  "background_popularity":1.0E-5}},
  {
"val":"pocketeded",
"count":1,
"skg":{
  "relatedness":0.4322,
  "foreground_popularity":1.0E-5,
  "background_popularity":1.0E-5}}]}},
{
  "val":"the-cursed-child",
  "count":60,
  "skg":{
"relatedness":0.23294,
"foreground_popularity":2.7E-4,
"background_popularity":4.2E-4},
  "body":{

[jira] [Updated] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9480:
---
Attachment: SOLR-9480.patch

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12354) org.apache.solr.security.PKIAuthenticationPlugin does not check response code when retrieving remotePublicKey

2018-05-14 Thread hamada (JIRA)
hamada created SOLR-12354:
-

 Summary: org.apache.solr.security.PKIAuthenticationPlugin does not 
check response code when retrieving remotePublicKey
 Key: SOLR-12354
 URL: https://issues.apache.org/jira/browse/SOLR-12354
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication
Affects Versions: 6.6.3, 6.6.2
Reporter: hamada


In decipherHeader(), if keyCache does not contain the key of interest, a remote 
call is made to retrieve the key from the remote host via getRemotePublicKey, 
which fails if the server returns an HTML error page instead of JSON.

e.g.:

org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
BEFORE='<' AFTER='html>  
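
As a rough illustration of the missing guard (this is not the actual plugin code, 
and the endpoint path below is only a placeholder), the idea is to check the HTTP 
status before handing the body to the JSON parser, so an HTML error page fails 
fast with a clear message instead of a noggit parse error:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PublicKeyFetchSketch {

  static String fetchPublicKeyJson(String nodeBaseUrl) throws IOException {
    // Placeholder endpoint path, used only for illustration.
    URL url = new URL(nodeBaseUrl + "/admin/info/key");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      int status = conn.getResponseCode();
      if (status != HttpURLConnection.HTTP_OK) {
        // Fail fast instead of feeding an HTML error page to the JSON parser.
        throw new IOException("Fetching public key from " + url + " failed: HTTP " + status);
      }
      try (InputStream in = conn.getInputStream()) {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}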

BadApple-ing tests

2018-05-14 Thread Erick Erickson
There won't be any for a couple of weeks as I'm on vacation.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 640 - Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/640/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for new leader null Live Nodes: [127.0.0.1:56821_solr, 
127.0.0.1:56831_solr, 127.0.0.1:56839_solr] Last available state: 
DocCollection(collection1//collections/collection1/state.json/13)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node62":{   "core":"collection1_shard1_replica_n61",   
"base_url":"http://127.0.0.1:56815/solr;,   
"node_name":"127.0.0.1:56815_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node64":{ 
  "core":"collection1_shard1_replica_n63",   
"base_url":"http://127.0.0.1:56821/solr;,   
"node_name":"127.0.0.1:56821_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node66":{ 
  "core":"collection1_shard1_replica_n65",   
"base_url":"http://127.0.0.1:56831/solr;,   
"node_name":"127.0.0.1:56831_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new leader
null
Live Nodes: [127.0.0.1:56821_solr, 127.0.0.1:56831_solr, 127.0.0.1:56839_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"collection1_shard1_replica_n61",
  "base_url":"http://127.0.0.1:56815/solr;,
  "node_name":"127.0.0.1:56815_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node64":{
  "core":"collection1_shard1_replica_n63",
  "base_url":"http://127.0.0.1:56821/solr;,
  "node_name":"127.0.0.1:56821_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node66":{
  "core":"collection1_shard1_replica_n65",
  "base_url":"http://127.0.0.1:56831/solr;,
  "node_name":"127.0.0.1:56831_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([86D8570A7971909:A07199CA65D72D23]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:187)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474989#comment-16474989
 ] 

Shawn Heisey commented on SOLR-12352:
-

Is the mod function actually useful for floating point numbers?  I did a simple 
test in Java code which shows that it *sort of* works, but suffers from 
precision problems:

{code:java}
  double dv = 5.7;
  double dms = 2.2;
  System.out.println(dv % dms); // 1.2999999999999998
{code}

I wouldn't have imagined any use for mod with anything other than whole 
numbers, but maybe that's a failure of imagination on my part.


> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.3-Linux (64bit/jdk-10) - Build # 226 - Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/226/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:35155/p_/c/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:35155/p_/c/collection1
at 
__randomizedtesting.SeedInfo.seed([FACB96690FBF513C:729FA9B3A1433CC4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-12340) Solr 7 does not do a phrase search by default for certain queries.

2018-05-14 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474975#comment-16474975
 ] 

Yonik Seeley commented on SOLR-12340:
-

Yes, this looks like an unfortunate change of behavior.  I looked into it 
myself a bit and found the same culprit (and workaround) that you did:
passing sow=true will restore the previous behavior.

The background issues are:
https://issues.apache.org/jira/browse/LUCENE-7799
https://issues.apache.org/jira/browse/LUCENE-7533

Basically, sow=false (the new default) is implemented in such a way that 
autoGeneratePhraseQueries can't work :-(
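
For anyone hitting this in the meantime, the workaround is just the extra 
parameter (the collection name below is a placeholder; the field in this report 
is "contents"):
{noformat}
http://localhost:8983/solr/mycollection/select?q=test3&df=contents&sow=true&debugQuery=true
{noformat}
With sow=true the field type's autoGeneratePhraseQueries applies again and 
debugQuery should show the phrase form, e.g. contents:"test 3".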



> Solr 7 does not do a phrase search by default for certain queries.
> --
>
> Key: SOLR-12340
> URL: https://issues.apache.org/jira/browse/SOLR-12340
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.2
> Environment: windows 7 x64 
> solr-spec 5.2.1
> lucene-spec 5.2.1
> java.runtime.version 1.8.0_112-b15
> jetty.version 9.3.8.v20160314
> solr-spec 7.2.1
> lucene-impl 7.2.1
> java.version 9.0.4
> jetty.version 9.3.8.v20160314
>Reporter: piyush nayak
>Priority: Major
> Fix For: 7.2.1
>
> Attachments: managed-schema-solr7, schema-solr5.xml
>
>
> We have recently upgraded from Solr 5 to Solr 7. I'm running into a change of 
> behavior detailed below:
> For the term "test3", Solr 7 splits the numeric and alphabetic components and 
> does a simple term search, while Solr 5 did a phrase search.
> ---
> lucene/solr-spec: 7.2.1
> [http://localhost:8991/solr/solr4/select?q=test3=test=json=true=true]
>  
> "debug":{
>     "rawquerystring":"test3",
>     "querystring":"test3",
>     "parsedquery":"contents:test contents:3",
>     "parsedquery_toString":"contents:test contents:3",
>  
> ---
> lucene/solr-spec 5.2.1
> [http://localhost:8989/solr/solr4/select?q=test3=test=json=true=true]
>  
> "debug":{
>     "rawquerystring":"test3",
>     "querystring":"test3",
>     "parsedquery":"PhraseQuery(contents:\"test 3\")",
>     "parsedquery_toString":"contents:\"test 3\"",
> 
> passing "sow=true" in the URL for Solr 7 makes it behave like 5.
> The schema.xml in both Solr versions for me is the one that gets copied from 
> the default template folder to the collections's conf folder.
> The fieldtype that corresponds to field "contents" is "text", and the 
> definition of "text" field in 5 and the schema backup on 7 is the same.
>  
> I tried the analysis tab. Looks like all the classes (WT, SF ...) in 7 list a 
> property (termFrequency = 1) that is missing in 5.
> Attaching the schemas for Solr 5 and 7.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-14 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-12353:
-

Assignee: Erick Erickson

> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Assignee: Erick Erickson
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently on one network when switching on authentication 
> (security.json) began experiencing significant delays (5-10 seconds) to 
> fulfill each request to /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration or DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within 
> reasonable timeframe if manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to circumvent the 
> servlet API was doubtful.
> Thank you
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474832#comment-16474832
 ] 

David Smiley commented on SOLR-12338:
-

I looked at this again (after a few days of vacation) and I withdraw my concern 
that there's a bug.  The use of ArrayBlockingQueue(1) is acting as a sort of 
Lock in the same way I suggested to use a Lock.  Couldn't you simply replace it 
with a Lock?  The put() becomes a lock(), and the poll() becomes an unlock(); 
see what I mean?  I think this is clearer since it's a simpler mechanism than 
an ArrayBlockingQueue, and the use of ABQ in this specific way (size 1) could 
lend itself to misuse later if someone thinks increasing its size or type gains 
us parallelism.  And I don't think the fairness setting matters here.  And 
although you initialized the size of this array of ABQ to be the number of 
threads, I think we ought to use a larger array to prevent collisions (prevent 
needlessly blocking on different docIDs that hash to the same thread).

I also was thinking of a way to have more "on-deck" runnables for a given 
docID, waiting in-line.  The Runnable we submit to the delegate could be some 
inner class OrderedRunnable that has a "next" pointer to the next 
OrderedRunnable.  We could maintain a parallel array of the top OrderedRunnable 
(parallel to an array of Locks).  Manipulating the OrderedRunnable chain 
requires holding the lock.  To ensure we bound these things waiting in-line, we 
could use one Semaphore for the whole OrderedExecutor instance.  There's more 
to it than this.  Of course this adds complexity, but the current approach 
(either ABQ or Lock) can unfortunately block needlessly if the doc ID is locked 
yet more/different doc IDs will soon be submitted and there are available 
threads.  Perhaps this is overthinking it (over optimization / complexity) as 
this will not be the common case?  This would be even more needless if we 
increase the Lock array to prevent collisions, so never mind, I guess.

 
{quote}(RE Submit without ID) This can help us to know how many threads are 
running (pending). Therefore OrderedExecutor does not execute more than 
\{{numThreads }}in parallel. It also solves the case when ExecutorService's 
queue is full it will throw RejectedExecutionException.
{quote}
Isn't this up to how the backing delegate is configured?  If it's using a fixed 
thread pool, then there won't be more threads running.  Likewise for 
RejectedExecutionException.
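
To make that concrete, here is a rough sketch (not the actual patch; the names 
are invented). Because the release happens on a pool thread rather than the 
submitting thread, the sketch uses one binary Semaphore per slot, which allows 
cross-thread release just like the ArrayBlockingQueue(1), instead of a 
thread-owned ReentrantLock, and it sizes the slot array independently of the 
thread count to reduce collisions:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class OrderedExecutorSketch {
  private final ExecutorService delegate;
  private final Semaphore[] stripes;

  public OrderedExecutorSketch(int numThreads, int numStripes) {
    this.delegate = Executors.newFixedThreadPool(numThreads);
    this.stripes = new Semaphore[numStripes];
    for (int i = 0; i < numStripes; i++) stripes[i] = new Semaphore(1);
  }

  // Work for the same id serializes; unrelated ids only contend if they hash
  // to the same stripe, which a larger stripe array makes less likely.
  public void execute(Object id, Runnable command) throws InterruptedException {
    Semaphore stripe = stripes[Math.floorMod(id.hashCode(), stripes.length)];
    stripe.acquire();                 // same role as ArrayBlockingQueue(1).put(...)
    try {
      delegate.execute(() -> {
        try {
          command.run();
        } finally {
          stripe.release();           // same role as poll()
        }
      });
    } catch (RuntimeException e) {
      stripe.release();               // e.g. RejectedExecutionException from the delegate
      throw e;
    }
  }

  public void shutdownAndAwait() throws InterruptedException {
    delegate.shutdown();
    delegate.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{code}

Usage would be something like new OrderedExecutorSketch(numThreads, numThreads * 16), 
so two updates for the same id still serialize while unrelated ids rarely contend.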

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different id are independent, therefore it is safe to 
> replay them in parallel. This will significantly reduce recovering time of 
> replicas in high load indexing environment. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474821#comment-16474821
 ] 

Shawn Heisey commented on SOLR-12353:
-

General advice for logger usage is to NOT wrap with the isXxxxEnabled call 
except in cases where obtaining logging parameters is likely to be slow and the 
level being tested is not enabled by default.

The quick solution, which we could do as a stepping stone, is to wrap this 
logging in isDebugEnabled as you have suggested.

Thinking larger ... I don't see any point to having hostname and port 
information in the debug log at all.  If it is actually useful, then 
getLocalName could be replaced with getLocalAddr.  Such log entries are 
unlikely to be viewed by anybody who doesn't know how the system is configured, 
so having an IP address in the log would not introduce a significant 
administrative burden.  Possible replacements without isDebugEnabled, with my 
preferred option first:

{code:java}
log.debug("Request to authenticate: {}", request);
{code}

or

{code:java}
log.debug("Request to authenticate: {}, address: {}, port: {}", 
request, request.getLocalAddr(), request.getLocalPort());
{code}

Side note: IMHO the /etc/hosts file should always contain an entry for every 
network interface on the machine.  But the simple fact is that users have 
configurations that aren't perfect, and Solr should work well even if the 
system config is not ideal.  Thanks for bringing the issue to our attention!


> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently on one network when switching on authentication 
> (security.json) began experiencing significant delays (5-10 seconds) to 
> fulfill each request to /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration or DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within 
> reasonable timeframe if manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to circumvent the 
> servlet API was doubtful.
> Thank you
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4630 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4630/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for 1x3 collection null Live Nodes: [127.0.0.1:55384_solr, 
127.0.0.1:55390_solr, 127.0.0.1:55396_solr, 127.0.0.1:55402_solr] Last 
available state: 
DocCollection(collection1//collections/collection1/state.json/21)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node62":{   "core":"collection1_shard1_replica_n61",   
"base_url":"http://127.0.0.1:55384/solr;,   
"node_name":"127.0.0.1:55384_solr",   "state":"recovering",   
"type":"NRT",   "force_set_state":"false"}, "core_node64":{ 
  "core":"collection1_shard1_replica_n63",   
"base_url":"http://127.0.0.1:55390/solr;,   
"node_name":"127.0.0.1:55390_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node66":{ 
  "core":"collection1_shard1_replica_n65",   
"base_url":"http://127.0.0.1:55396/solr;,   
"node_name":"127.0.0.1:55396_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"3",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for 1x3 collection
null
Live Nodes: [127.0.0.1:55384_solr, 127.0.0.1:55390_solr, 127.0.0.1:55396_solr, 
127.0.0.1:55402_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/21)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"collection1_shard1_replica_n61",
  "base_url":"http://127.0.0.1:55384/solr;,
  "node_name":"127.0.0.1:55384_solr",
  "state":"recovering",
  "type":"NRT",
  "force_set_state":"false"},
"core_node64":{
  "core":"collection1_shard1_replica_n63",
  "base_url":"http://127.0.0.1:55390/solr;,
  "node_name":"127.0.0.1:55390_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node66":{
  "core":"collection1_shard1_replica_n65",
  "base_url":"http://127.0.0.1:55396/solr;,
  "node_name":"127.0.0.1:55396_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([B3451031C8378AF2:1B590C8B0A77BED8]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:200)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-14 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12353:
-
Description: 
Hello,

We use Solr 6.6.3. Recently on one network when switching on authentication 
(security.json) began experiencing significant delays (5-10 seconds) to fulfill 
each request to /solr index.

I debugged the issue and it was essentially triggered by line 456 of 
SolrDispatchFilter.java:
{code:java}
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
{code}
The issue is that on machines and networks with poor configuration or DNS 
issues in particular, request.getLocalName() can trigger expensive reverse DNS 
queries for the ethernet interfaces, and will only return within reasonable 
timeframe if manually written into /etc/hosts.

More to the point, request.getLocalName() should be considered an expensive 
operation in general, and in SolrDispatchFilter it runs unconditionally even if 
debug is disabled.

I would suggest to either replace request.getLocalName/Port here, or at the 
least, wrap the debug operation so it doesn't affect any production systems:
{code:java}
if (log.isDebugEnabled()) {
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
}
{code}
The authenticateRequest method in question is private so we could not override 
it and making another HttpServletRequestWrapper to circumvent the servlet API 
was doubtful.

Thank you

 

  was:
Hello,

We use Solr 6.6.3. Recently on one network when switching on authentication 
(security.json) began experiencing significant delays (5-10 seconds) to fulfill 
each request to /solr index.

I debugged the issue and it was essentially triggered by line 456 of 
SolrDispatchFilter.java:
{code:java}
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
{code}
The issue is that on machines and networks with poor configuration or routing 
issues in particular, request.getLocalName() can trigger expensive reverse DNS 
queries for the ethernet interfaces, and will only return within reasonable 
timeframe if manually written into /etc/hosts.

More to the point, request.getLocalName() should be considered an expensive 
operation in general, and in SolrDispatchFilter it runs unconditionally even if 
debug is disabled.

I would suggest to either replace request.getLocalName/Port here, or at the 
least, wrap the debug operation so it doesn't affect any production systems:
{code:java}
if (log.isDebugEnabled()) {
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
}
{code}
The authenticateRequest method in question is private so we could not override 
it and making another HttpServletRequestWrapper to circumvent the servlet API 
was doubtful.

Thank you

 


> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently on one network when switching on authentication 
> (security.json) began experiencing significant delays (5-10 seconds) to 
> fulfill each request to /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration or DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within 
> reasonable timeframe if manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to 

[jira] [Comment Edited] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474798#comment-16474798
 ] 

Andrey Kudryavtsev edited comment on SOLR-12352 at 5/14/18 9:04 PM:


Casting to double is a "better" approach (i.e. it will give correct answers in 
more cases) but in general [it is still 
broken|https://stackoverflow.com/a/17930989] :
{code:java}
 
long version1 = 1600463487761383425L;
long version2 = 1600463487761383424L;
System.out.println((double)version1 == (double) version2); //true
{code}
Probably there should be more complex logic behind that user function 


was (Author: werder):
Casting to double is a "better" approach (i.e. it will give correct answers in 
more cases) but in general [it is still broken|http://example.com/] :
{code:java}
 
long version1 = 1600463487761383425L;
long version2 = 1600463487761383424L;
System.out.println((double)version1 == (double) version2); //true
{code}
Probably there should be more complex logic behind that user function 

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474798#comment-16474798
 ] 

Andrey Kudryavtsev edited comment on SOLR-12352 at 5/14/18 9:03 PM:


Casting to double is a "better" approach (i.e. it will give correct answers in 
more cases) but in general [it is still broken|http://example.com/] :
{code:java}
 
long version1 = 1600463487761383425L;
long version2 = 1600463487761383424L;
System.out.println((double)version1 == (double) version2); //true
{code}
Probably there should be more complex logic behind that user function 


was (Author: werder):
Casting to double is a "better" approach (i.e. it will give correct answers in 
more cases) but in general [it is still broken|http://example.com] : 

{code} 
long version1 = 1600463487761383425L;
long version2 = 1600463487761383424L;
System.out.println((double)version1 == (double) version2); //true
{code}

Probably there should be more complex logic here 

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474798#comment-16474798
 ] 

Andrey Kudryavtsev commented on SOLR-12352:
---

Casting to double is a "better" approach (i.e. it will give correct answers in 
more cases) but in general [it is still broken|http://example.com] : 

{code} 
long version1 = 1600463487761383425L;
long version2 = 1600463487761383424L;
System.out.println((double)version1 == (double) version2); //true
{code}

Probably there should be more complex logic here 

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-14 Thread Pascal Proulx (JIRA)
Pascal Proulx created SOLR-12353:


 Summary: SolrDispatchFilter expensive non-conditional debug line 
degrades performance
 Key: SOLR-12353
 URL: https://issues.apache.org/jira/browse/SOLR-12353
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI, Authentication, logging
Affects Versions: 6.6.3
Reporter: Pascal Proulx


Hello,

We use Solr 6.6.3. Recently, on one network, when switching on authentication 
(security.json) we began experiencing significant delays (5-10 seconds) fulfilling 
each request to the /solr index.

I debugged the issue and it was essentially triggered by line 456 of 
SolrDispatchFilter.java:
{code:java}
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
{code}
The issue is that on machines and networks with poor configuration or routing 
issues in particular, request.getLocalName() can trigger expensive reverse DNS 
queries for the ethernet interfaces, and will only return within reasonable 
timeframe if manually written into /etc/hosts.

More to the point, request.getLocalName() should be considered an expensive 
operation in general, and in SolrDispatchFilter it runs unconditionally even if 
debug is disabled.

I would suggest to either replace request.getLocalName/Port here, or at the 
least, wrap the debug operation so it doesn't affect any production systems:
{code:java}
if (log.isDebugEnabled()) {
log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
request.getLocalName(), request.getLocalPort());
}
{code}
The authenticateRequest method in question is private so we could not override 
it and making another HttpServletRequestWrapper to circumvent the servlet API 
was doubtful.

Thank you

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474786#comment-16474786
 ] 

Steve Rowe commented on SOLR-12352:
---

Solr's functions operate on floats - maybe this problem follows from that?

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474781#comment-16474781
 ] 

Shawn Heisey commented on SOLR-12352:
-

If I add a line where the cast is to double, I get an ALMOST correct number.  
It's off by one.

{code:java}
long version = 1600463487761383425L;
long ms = 1526324140364L;
System.out.println(version % ms); // 1204927482853
System.out.println((float)version % (float)ms); // 1.28043752E12 i.e. 128043752
System.out.println((double)version % (double)ms); // 1.204927482852E12
{code}


> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474771#comment-16474771
 ] 

Andrey Kudryavtsev commented on SOLR-12352:
---

[Longs are cast to 
floats|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java#L232]
 in the "mod" function, that's the problem: 
{code:java}
long version = 1600463487761383425L;
long ms = 1526324140364L;
System.out.println(version % ms); // 1204927482853
System.out.println((float)version % (float)ms); // 1.28043752E12 i.e. 128043752
{code}

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474752#comment-16474752
 ] 

Shawn Heisey edited comment on SOLR-12352 at 5/14/18 8:31 PM:
--

Problem confirmed on 6.6.2-SNAPSHOT and 7.3.0.  Added the following parameter 
to a query:

{noformat}
fl=*,foo:mod(1600463487761383425,1526324140364)
{noformat}

That added the following to each document in the result (wt=json):

{noformat}
"foo":1.28043752E12
{noformat}

I found it irritating that I got exponential notation (and the associated loss 
of precision) instead of a true number, but as the issue indicates, the value 
is completely wrong.  The correct value of 1204927482853 was confirmed both in 
a test Java program and with the scientific calculator built into Windows 7.


was (Author: elyograg):
Problem confirmed on 6.6.2-SNAPSHOT and 7.3.0.  Added the following parameter 
to a query:

{noformat}
fl=*,foo:mod(1600463487761383425,1526324140364)
{noformat}

That added the following to each document in the result (wt=json:

{noformat}
"foo":1.28043752E12
{noformat}

I found it irritating that I got exponential notation (and the associated loss 
of precision) instead of a true number, but as the issue indicates, the value 
is completely wrong.  The correct value of 1204927482853 was confirmed both in 
a test Java program and with the scientific calculator built into Windows 7.

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12352:

Description: 
It seems mod operation in function query is not working correctly for large 
numbers.

{noformat}
"_version_": 1600463487761383425,
"ms(NOW)": 1526324140364,
"mod(_version_,ms(NOW))": 128043752
{noformat}
 
However, mod(1600463487761383425,1526324140364) is 1204927482853.
 

  was:
It seems mod operation in function query is not working correctly for large 
numbers.
"_version_": 1600463487761383425,
"ms(NOW)": 1526324140364,
"mod(_version_,ms(NOW))": 128043752
 
However, mod(1600463487761383425,1526324140364) is 1204927482853.
 


> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> {noformat}
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
> {noformat}
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12352:

Affects Version/s: 7.3

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474752#comment-16474752
 ] 

Shawn Heisey commented on SOLR-12352:
-

Problem confirmed on 6.6.2-SNAPSHOT and 7.3.0.  Added the following parameter 
to a query:

{noformat}
fl=*,foo:mod(1600463487761383425,1526324140364)
{noformat}

That added the following to each document in the result (wt=json):

{noformat}
"foo":1.28043752E12
{noformat}

I found it irritating that I got exponential notation (and the associated loss 
of precision) instead of a true number, but as the issue indicates, the value 
is completely wrong.  The correct value of 1204927482853 was confirmed both in 
a test Java program and with the scientific calculator built into Windows 7.
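
Below is a minimal, hypothetical Java sketch (the class name and the explicit float cast are illustrative assumptions, not the actual Solr code path) showing how 32-bit float arithmetic mangles the remainder of operands this large, while 64-bit long arithmetic gives the 1204927482853 mentioned above:

{code:java}
public class ModPrecisionCheck {
  public static void main(String[] args) {
    long version = 1600463487761383425L;
    long now = 1526324140364L;

    // Exact 64-bit arithmetic: prints 1204927482853, the value confirmed above.
    System.out.println(version % now);

    // The same operands squeezed through 32-bit floats: a float only carries
    // about 7 significant decimal digits, so the low-order digits of _version_
    // are lost before the remainder is even computed, and the result is wrong.
    System.out.println((float) version % (float) now);
  }
}
{code}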

> Solr mod function query does not yield correct results
> --
>
> Key: SOLR-12352
> URL: https://issues.apache.org/jira/browse/SOLR-12352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: gnandre
>Priority: Minor
>
> It seems mod operation in function query is not working correctly for large 
> numbers.
> "_version_": 1600463487761383425,
> "ms(NOW)": 1526324140364,
> "mod(_version_,ms(NOW))": 128043752
>  
> However, mod(1600463487761383425,1526324140364) is 1204927482853.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-14 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474749#comment-16474749
 ] 

Andrzej Bialecki  commented on SOLR-11779:
--

Patch that implements the collection of metrics in RRD4j databases, which are 
then stored in the {{.system}} collection. This data can then be retrieved from 
the new {{/admin/metrics/history}} endpoint, either as numeric data or as PNG 
graphs.

(Note: in order to test this you need to also manually create the {{.system}} 
collection, which is not created by default).
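
Purely as an illustrative sketch of how such an endpoint might be queried from 
SolrJ (the base URL, the absence of request parameters, and the response 
handling below are assumptions, not part of the patch):

{code:java}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.client.solrj.response.SimpleSolrResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MetricsHistoryPeek {
  public static void main(String[] args) throws Exception {
    // Assumed local node URL; any selection/format parameters are deliberately left out.
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      GenericSolrRequest req = new GenericSolrRequest(
          SolrRequest.METHOD.GET, "/admin/metrics/history", new ModifiableSolrParams());
      SimpleSolrResponse rsp = req.process(client);
      // Prints the numeric form; the PNG form would need different handling.
      System.out.println(rsp.getResponse());
    }
  }
}
{code}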

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing useful 
> out-of-the-box insights into the basic system behavior over time. This data 
> could be persisted to the {{.system}} collection as blobs, and it could also 
> be presented in the Admin UI as graphs.
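
To make the round-robin idea from the description above concrete, here is a 
small, hypothetical RRD4j sketch (the file name, step, datasource and archive 
sizes are made-up values): the archive is sized up front, so the file stays the 
same size no matter how long samples keep arriving, and once it wraps the 
oldest rows are overwritten.

{code:java}
import org.rrd4j.ConsolFun;
import org.rrd4j.DsType;
import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;
import org.rrd4j.core.Sample;
import org.rrd4j.core.Util;

public class RoundRobinSketch {
  public static void main(String[] args) throws Exception {
    // One sample per minute, averaged into a fixed 240-row archive (~4 hours of data).
    RrdDef def = new RrdDef("metric.rrd", 60);
    def.addDatasource("value", DsType.GAUGE, 120, 0, Double.NaN);
    def.addArchive(ConsolFun.AVERAGE, 0.5, 1, 240);

    RrdDb db = new RrdDb(def);
    try {
      Sample sample = db.createSample();
      sample.setTime(Util.getTime());
      sample.setValue("value", 42.0);
      sample.update(); // once the 240 rows are full, new samples overwrite the oldest ones
    } finally {
      db.close();
    }
  }
}
{code}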



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-14 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11779:
-
Attachment: SOLR-11779.patch

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing useful 
> out-of-the-box insights into the basic system behavior over time. This data 
> could be persisted to the {{.system}} collection as blobs, and it could also 
> be presented in the Admin UI as graphs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12352) Solr mod function query does not yield correct results

2018-05-14 Thread Gitesh (JIRA)
Gitesh created SOLR-12352:
-

 Summary: Solr mod function query does not yield correct results
 Key: SOLR-12352
 URL: https://issues.apache.org/jira/browse/SOLR-12352
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Gitesh


It seems mod operation in function query is not working correctly for large 
numbers.
"_version_": 1600463487761383425,
"ms(NOW)": 1526324140364,
"mod(_version_,ms(NOW))": 128043752
 
However, mod(1600463487761383425,1526324140364) is 1204927482853.
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 22009 - Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22009/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.TestStressVersions.testStressGetRealtimeVersions

Error Message:
Captured an uncaught exception in thread: Thread[id=9153, name=WRITER7, 
state=RUNNABLE, group=TGRP-TestStressVersions]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9153, name=WRITER7, state=RUNNABLE, 
group=TGRP-TestStressVersions]
at 
__randomizedtesting.SeedInfo.seed([541FB31E5FCBB6E2:511BA22112B7B2BE]:0)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([541FB31E5FCBB6E2]:0)
at 
org.apache.solr.search.TestStressVersions$1.run(TestStressVersions.java:199)
Caused by: java.lang.NullPointerException
at 
org.apache.solr.update.UpdateLog.getCurrentLogSizeFromStream(UpdateLog.java:299)
at 
org.apache.solr.update.DirectUpdateHandler2.getCurrentTLogSize(DirectUpdateHandler2.java:1007)
at 
org.apache.solr.update.DirectUpdateHandler2.updateDeleteTrackers(DirectUpdateHandler2.java:432)
at 
org.apache.solr.update.DirectUpdateHandler2.delete(DirectUpdateHandler2.java:465)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:75)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:59)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalDelete(DistributedUpdateProcessor.java:956)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionDelete(DistributedUpdateProcessor.java:1844)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteById(DistributedUpdateProcessor.java:1381)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:1359)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:124)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleDeleteMap(JsonLoader.java:394)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleDeleteCommand(JsonLoader.java:311)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:171)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:121)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:84)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2510)
at 
org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
at org.apache.solr.SolrTestCaseJ4.updateJ(SolrTestCaseJ4.java:1286)
at 
org.apache.solr.SolrTestCaseJ4.deleteAndGetVersion(SolrTestCaseJ4.java:1464)
at 
org.apache.solr.search.TestStressVersions$1.run(TestStressVersions.java:144)




Build Log:
[...truncated 13649 lines...]
   [junit4] Suite: org.apache.solr.search.TestStressVersions
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.search.TestStressVersions_541FB31E5FCBB6E2-001/init-core-data-001
   [junit4]   2> 1165240 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/lib,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 1165269 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1165283 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.s.IndexSchema [null] Schema name=test
   [junit4]   2> 1165346 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
   [junit4]   2> 1165390 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@10b1a7d
   [junit4]   2> 1165398 INFO  
(SUITE-TestStressVersions-seed#[541FB31E5FCBB6E2]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 589 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/589/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

31 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportWrongSolrUrl

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_695E6346B5610B71-001\tempDir-001

at 
__randomizedtesting.SeedInfo.seed([695E6346B5610B71:3E0964D4F62714B1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:360)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   

[jira] [Commented] (SOLR-12200) ZkControllerTest failure. Leaking Overseer

2018-05-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474691#comment-16474691
 ] 

Mikhail Khludnev commented on SOLR-12200:
-

https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/620/testReport/junit/junit.framework/TestSuite/org_apache_solr_cloud_ZkControllerTest/

> ZkControllerTest failure. Leaking Overseer
> --
>
> Key: SOLR-12200
> URL: https://issues.apache.org/jira/browse/SOLR-12200
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12200.patch, SOLR-12200.patch, SOLR-12200.patch, 
> SOLR-12200.patch, patch-unit-solr_core.zip, tests-failures.txt, 
> tests-failures.txt.gz, zk.fail.txt.gz
>
>
> Failure seems suspiciously the same. 
>[junit4]   2> 499919 INFO  
> (TEST-ZkControllerTest.testReadConfigName-seed#[BC856CC565039E77]) 
> [n:127.0.0.1:8983_solr] o.a.s.c.Overseer Overseer 
> (id=73578760132362243-127.0.0.1:8983_solr-n_00) closing
>[junit4]   2> 499920 INFO  
> (OverseerStateUpdate-73578760132362243-127.0.0.1:8983_solr-n_00) [
> ] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:8983_solr
>[junit4]   2> 499920 ERROR 
> (OverseerCollectionConfigSetProcessor-73578760132362243-127.0.0.1:8983_solr-n_00)
>  [] o.a.s.c.OverseerTaskProcessor Unable to prioritize overseer
>[junit4]   2> java.lang.InterruptedException: null
>[junit4]   2>at java.lang.Object.wait(Native Method) ~[?:1.8.0_152]
>[junit4]   2>at java.lang.Object.wait(Object.java:502) 
> ~[?:1.8.0_152]
>[junit4]   2>at 
> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1409) 
> ~[zookeeper-3.4.11.jar:3.4
> then it spins in SessionExpiredException, all tests pass but suite fails due 
> to leaking Overseer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 620 - Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/620/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

13 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([9058C48CB87B2845]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 

[JENKINS] Lucene-Solr-repro - Build # 629 - Still Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/629/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/58/consoleText

[repro] Revision: 92b4a935dc48d58613a158a441b701e09ccc5047

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=5E14A061AE531331 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-TN -Dtests.timezone=America/Guayaquil -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AutoscalingHistoryHandlerTest 
-Dtests.method=testHistory -Dtests.seed=5E14A061AE531331 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk 
-Dtests.timezone=Africa/Lagos -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
02849fb707626cf4312f59324fd894be117787c1
[repro] git fetch
[repro] git checkout 92b4a935dc48d58613a158a441b701e09ccc5047

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SearchRateTriggerIntegrationTest
[repro]   AutoscalingHistoryHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SearchRateTriggerIntegrationTest|*.AutoscalingHistoryHandlerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=5E14A061AE531331 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-TN 
-Dtests.timezone=America/Guayaquil -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 8461 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro] git checkout 02849fb707626cf4312f59324fd894be117787c1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 628 - Still Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/628/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1542/consoleText

[repro] Revision: a0acc63d020fbe3f50980820c5aba6601785eb68

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=A265B202A3119059 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=NodeAddedTriggerTest 
-Dtests.method=testRestoreState -Dtests.seed=A265B202A3119059 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-IN -Dtests.timezone=CTT -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
02849fb707626cf4312f59324fd894be117787c1
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout a0acc63d020fbe3f50980820c5aba6601785eb68

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SearchRateTriggerTest
[repro]   NodeAddedTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SearchRateTriggerTest|*.NodeAddedTriggerTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=A265B202A3119059 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 2433 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest
[repro] git checkout 02849fb707626cf4312f59324fd894be117787c1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12351) Additional args with spaces prevent startup

2018-05-14 Thread Jose Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Ross updated SOLR-12351:
-
Description: 
Adding a system property with white spaces results in a startup error:
{code:java}
VAR_WITH_SPACES=some value
-Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
{noformat}
Error: Could not find or load main class value{noformat}
It looks like the quotes are removed and this prevents the server from starting.

Also tried using *SOLR_ADDL_ARGS* but then the script crashes with this message:
{code:java}
SET SOLR_ADDL_ARGS=-Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
{noformat}
value""=="" was unexpected at this time.{noformat}

  was:
Adding a system property with white spaces results in a startup error:
{code:java}
VAR_WITH_SPACES=some value
-Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
{noformat}
Error: Could not find or load main class value{noformat}
It looks like the quotes are removed and this prevents the server from starting.

Also tried using *SOLR_ADDL_ARGS* but then the script crashes with this message:
{noformat}
value""=="" was unexpected at this time.{noformat}


> Additional args with spaces prevent startup
> ---
>
> Key: SOLR-12351
> URL: https://issues.apache.org/jira/browse/SOLR-12351
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.6.3
> Environment: Windows 10
>Reporter: Jose Ross
>Priority: Blocker
>
> Adding a system property with white spaces results in a startup error:
> {code:java}
> VAR_WITH_SPACES=some value
> -Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
> {noformat}
> Error: Could not find or load main class value{noformat}
> It looks like the quotes are removed and this prevents the server from 
> starting.
> Also tried using *SOLR_ADDL_ARGS* but then the script crashes with this 
> message:
> {code:java}
> SET SOLR_ADDL_ARGS=-Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
> {noformat}
> value""=="" was unexpected at this time.{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12351) Additional args with spaces prevent startup

2018-05-14 Thread Jose Ross (JIRA)
Jose Ross created SOLR-12351:


 Summary: Additional args with spaces prevent startup
 Key: SOLR-12351
 URL: https://issues.apache.org/jira/browse/SOLR-12351
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Affects Versions: 6.6.3
 Environment: Windows 10
Reporter: Jose Ross


Adding a system property with white spaces results in a startup error:
{code:java}
VAR_WITH_SPACES=some value
-Dmy.custom.prop="%VAR_WITH_SPACES%"{code}
{noformat}
Error: Could not find or load main class value{noformat}
It looks like the quotes are removed and this prevents the server from starting.

Also tried using *SOLR_ADDL_ARGS* but then the script crashes with this message:
{noformat}
value""=="" was unexpected at this time.{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 627 - Still Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/627/

[...truncated 57 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/603/consoleText

[repro] Revision: 92b4a935dc48d58613a158a441b701e09ccc5047

[repro] Repro line:  ant test  -Dtestcase=CreateRoutedAliasTest 
-Dtests.method=testV2 -Dtests.seed=94D7E3130FF4A4B2 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-NI 
-Dtests.timezone=America/Argentina/Catamarca -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=94D7E3130FF4A4B2 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-EG 
-Dtests.timezone=PRT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
02849fb707626cf4312f59324fd894be117787c1
[repro] git fetch
[repro] git checkout 92b4a935dc48d58613a158a441b701e09ccc5047

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SearchRateTriggerIntegrationTest
[repro]   CreateRoutedAliasTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SearchRateTriggerIntegrationTest|*.CreateRoutedAliasTest" 
-Dtests.showOutput=onerror  -Dtests.seed=94D7E3130FF4A4B2 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ar-EG -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 4506 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.CreateRoutedAliasTest
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro] git checkout 02849fb707626cf4312f59324fd894be117787c1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 219 - Failure

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/219/

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.RangeFacetTest

Error Message:
Error from server at https://127.0.0.1:42076/solr: KeeperErrorCode = Session 
expired for /configs/conf

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42076/solr: KeeperErrorCode = Session expired 
for /configs/conf
at __randomizedtesting.SeedInfo.seed([77D12CD48A498E32]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.FullSolrCloudDistribCmdsTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([ADF698D9F533C838]:0)


FAILED:  org.apache.solr.cloud.TestOnReconnectListenerSupport.test

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /clusterstate.json
at 
__randomizedtesting.SeedInfo.seed([ADF698D9F533C838:25A2A7035BCFA5C0]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215)
at 

[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474425#comment-16474425
 ] 

Hoss Man commented on SOLR-9480:


Updated patch...

bq. ... some bug where the existing refinement logic isn't picking up the 
function contributions of some shards when the doc count is 0 ...

I tracked down the problems I was seeing to SOLR-12343 -- knowing that bug 
exists, I've made the test "self regulate" to avoid it: first it picks a random 
sort option, and then, if the sort is one that _might_ result in that bug 
causing incorrect stat values, I force the limit option to be big enough that 
all possible field values will be refined & returned to the client. This let me 
relax a *lot* of the other constraints I had put on the test based on earlier 
misdiagnoses of what was causing these types of problems.

{quote}
* right now the redundant fore & back "size" values (which are the same for 
every slot/bucket) are returned for every bucket ... I'd like to try and figure 
out if I can put that data in the facet "context" to reduce the shard response 
size.
...
* figuring out what/how/where to put info in the facetDebug output
{quote}

...neither of these really seems possible at the moment, for similar/overlapping 
reasons...

# There is currently no "per bucket" facet debug information; everything is 
reported at the "Per-Facet" level
# AggValueSources/SlotAccs currently don't know what their "key" is in the 
parent facet/context ...
#* which means there is no "safe" key for the accumulator to use if it tried to 
put data directly into the (parent) facet response, or if it tried to put any 
stat-specific debug info in the FacetDebugInfo Map

(This situation of "what is my instance's key/name?" is something we probably 
want to solve eventually, if not for optimizing the response size and/or adding 
debugging info then for the "making the queries optional, and inheriting them 
from "ancestor" function instances higher up the tree" type functionality I 
mentioned in my previous comment)

...so I went ahead and removed those nocommits from the patch.



> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9480:
---
Attachment: SOLR-9480.patch

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-05-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474278#comment-16474278
 ] 

David Smiley commented on LUCENE-8306:
--

{quote}Could we address this need by calling extract terms on the weight, and 
filtering the positions/offsets of these terms to only keep those that 
intersect with the returned matches?
{quote}
Nice idea but it would be inaccurate, and I think we should aim for accurate 
results with this new API.

For example, if the query is "Game of Thrones" near "Show",  then extracting 
terms is going to find "of" and other words.  But "of" ought to only be a match 
when it's in the phrase "Game of Thrones", not in other places that happen to 
occur in the larger span near "Show".  Our highlighters have failed this for a 
long time but only recently was the UnifiedHighlighter improved to resolve this 
by using the SpanCollector API – LUCENE-8121  (for 7.3, yay).
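
For readers following along, here is a rough Lucene sketch of the query shape in 
that example (the field name and slop values are assumptions, and analysis is 
ignored); the point is that "of" should only count where the inner ordered near 
matches, not anywhere else inside the outer near:

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class PhraseNearSketch {
  public static void main(String[] args) {
    String field = "body"; // assumed field name

    // "Game of Thrones" as an ordered, zero-slop phrase of spans.
    SpanQuery phrase = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term(field, "game")),
        new SpanTermQuery(new Term(field, "of")),
        new SpanTermQuery(new Term(field, "thrones"))
    }, 0, true);

    // ...near "show", unordered, within an assumed slop of 10 positions.
    SpanQuery query = new SpanNearQuery(new SpanQuery[] {
        phrase,
        new SpanTermQuery(new Term(field, "show"))
    }, 10, false);

    // Simply extracting terms from 'query' would surface every "of"; accurate
    // match positions have to come from where the inner phrase actually matched.
    System.out.println(query);
  }
}
{code}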

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.3-Linux (64bit/jdk-11-ea+5) - Build # 224 - Failure!

2018-05-14 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11752) add gzip to jetty

2018-05-14 Thread Matthew Sporleder (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474223#comment-16474223
 ] 

Matthew Sporleder commented on SOLR-11752:
--

one more try

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch
>
>
> With a little bit of typing I am able to add gzip to my Solr's Jetty, which 
> is a big help for SAN access and completely out-of-band to Solr, *and* it only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code}
> < Content-Length: 13349
> {code}
> ---
> A regular query:
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code}
> < Content-Length: 17761
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11752) add gzip to jetty

2018-05-14 Thread Matthew Sporleder (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Sporleder updated SOLR-11752:
-
Attachment: SOLR-11752.patch

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch
>
>
> With a little bit of typing I am able to add gzip to my Solr's Jetty, which 
> is a big help for SAN access and completely out-of-band to Solr, *and* it only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code}
> < Content-Length: 13349
> {code}
> ---
> A regular query:
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code}
> < Content-Length: 17761
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11752) add gzip to jetty

2018-05-14 Thread Matthew Sporleder (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Sporleder updated SOLR-11752:
-
Attachment: (was: SOLR-11752.patch)

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch
>
>
> With a little bit of typing I am able to add gzip to my Solr's Jetty, which 
> is a big help for SAN access and completely out-of-band to Solr, *and* it only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code}
> < Content-Length: 13349
> {code}
> ---
> A regular query:
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code}
> < Content-Length: 17761
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11752) add gzip to jetty

2018-05-14 Thread Matthew Sporleder (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Sporleder updated SOLR-11752:
-
Attachment: (was: SOLR-11752.patch)

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch
>
>
> With a little bit of typing I am able to add gzip to my Solr's Jetty, which 
> is a big help for SAN access and completely out-of-band to Solr, *and* it only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code}
> < Content-Length: 13349
> {code}
> ---
> A regular query:
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code}
> < Content-Length: 17761
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8264) Allow an option to rewrite all segments

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474157#comment-16474157
 ] 

Jan Høydahl edited comment on LUCENE-8264 at 5/14/18 1:05 PM:
--

[~erickerickson] check out SOLR-10046 which seems to do what you want in <2> 
already through 
{{[UninvertDocValuesMergePolicyFactory|https://lucene.apache.org/solr/7_3_0/solr-core/index.html?org/apache/solr/index/UninvertDocValuesMergePolicyFactory.html]}},
 or do I misunderstand?


was (Author: janhoy):
[~erickerickson] check out SOLR-10046 which seems to do what you want in <2> 
through {{UninvertDocValuesMergePolicyFactory}} already, or do I misunderstand?

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474157#comment-16474157
 ] 

Jan Høydahl commented on LUCENE-8264:
-

[~erickerickson] check out SOLR-10046 which seems to do what you want in <2> 
through {{UninvertDocValuesMergePolicyFactory}} already, or do I misunderstand?

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4629 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4629/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/80)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1526302564014731800", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1526302564016205750",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1526302564015983850",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/80)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10002_solr",
  

[jira] [Comment Edited] (LUCENE-8264) Allow an option to rewrite all segments

2018-05-14 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474131#comment-16474131
 ] 

Simon Willnauer edited comment on LUCENE-8264 at 5/14/18 12:46 PM:
---

[~erickerickson]
 # For N-1 -> N we have _org.apache.lucene.index.UpgradeIndexMergePolicy_?
 # In order to add DV I think this should be done by wrapping a codec reader. I 
personally think this is quite an edge case and it should be done in the 
higher-level application, i.e. Solr itself. You can do this quite easily with 
_org.apache.lucene.index.OneMergeWrappingMergePolicy_, similar to what we do in 
the soft-delete case in _SoftDeletesRetentionMergePolicy_.

Am I missing something?

 


was (Author: simonw):
[~erickerickson]

 
 # For N-1 -> N we have _org.apache.lucene.index.UpgradeIndexMergePolicy_?
 # In order to add DV I think this should be done by wrapping a codec reader. I 
personally think this is quite an edge case and it should be done in the 
higher-level application, i.e. Solr itself. You can do this quite easily with 
_org.apache.lucene.index.OneMergeWrappingMergePolicy_, similar to what we do in 
the soft-delete case in _SoftDeletesRetentionMergePolicy_.

Am I missing something?

 

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-05-14 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474131#comment-16474131
 ] 

Simon Willnauer commented on LUCENE-8264:
-

[~erickerickson]

 
 # For N-1 -> N we have _org.apache.lucene.index.UpgradeIndexMergePolicy_?
 # In order to add DV I think this should be done by wrapping a codec reader. I 
personally think this is quite an edge case and it should be done in the 
higher-level application, i.e. Solr itself. You can do this quite easily with 
_org.apache.lucene.index.OneMergeWrappingMergePolicy_, similar to what we do in 
the soft-delete case in _SoftDeletesRetentionMergePolicy_.

Am I missing something?
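
For <1>, a minimal sketch of driving that upgrade path with 
{{UpgradeIndexMergePolicy}} (an illustration only, not code from the issue: the 
class name, the {{StandardAnalyzer}}, and the {{TieredMergePolicy}} delegate are 
assumptions; it is roughly what {{IndexUpgrader}} already does):
{code:java}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.index.UpgradeIndexMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Wrap the normal merge policy so a forced merge rewrites segments that are
// still in an older format. This covers the N-1 -> N case only; it does not
// add doc values to segments that are already in the current format.
public class UpgradeAllSegments {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
      iwc.setMergePolicy(new UpgradeIndexMergePolicy(new TieredMergePolicy()));
      try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        writer.forceMerge(1); // triggers findForcedMerges on the wrapped policy
      }
    }
  }
}
{code}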

 

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474120#comment-16474120
 ] 

Steve Rowe commented on LUCENE-8273:


Okay, I'll finish up the remaining work today.

> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-05-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474116#comment-16474116
 ] 

Adrien Grand commented on LUCENE-8306:
--

bq. I think it's important, particularly if we're talking about highlighting 
terms in very large intervals.

Could we address this need by calling extract terms on the weight, and 
filtering the positions/offsets of these terms to only keep those that 
intersect with the returned matches?
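
A rough sketch of that suggestion (the helper name and the assumption that the 
field was indexed with offsets are mine; this is not code from the patch): pull 
the terms out of the {{Weight}}, then walk each term's postings and keep only 
the occurrences whose position falls inside a reported match window.
{code:java}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.Weight;

// Sketch: report positions/offsets of the query's terms inside one match window.
public class MatchTermFilter {
  static void collectTermsInWindow(Weight weight, LeafReaderContext ctx, int doc,
                                   String field, int matchStartPos, int matchEndPos)
      throws IOException {
    Set<Term> terms = new HashSet<>();
    weight.extractTerms(terms);                    // the terms the query is built from
    Terms fieldTerms = ctx.reader().terms(field);
    if (fieldTerms == null) {
      return;
    }
    TermsEnum te = fieldTerms.iterator();
    for (Term t : terms) {
      if (!t.field().equals(field) || !te.seekExact(t.bytes())) {
        continue;
      }
      PostingsEnum pe = te.postings(null, PostingsEnum.OFFSETS);
      if (pe.advance(doc) != doc) {
        continue;                                  // term does not occur in this doc
      }
      for (int i = 0; i < pe.freq(); i++) {
        int pos = pe.nextPosition();
        if (pos >= matchStartPos && pos <= matchEndPos) {
          System.out.printf("%s pos=%d offsets=[%d,%d)%n",
              t.text(), pos, pe.startOffset(), pe.endOffset());
        }
      }
    }
  }
}
{code}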

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12350) Do not use docValues as stored for _str (copy)fields in _default configset

2018-05-14 Thread JIRA
Jan Høydahl created SOLR-12350:
--

 Summary: Do not use docValues as stored for _str (copy)fields in 
_default configset
 Key: SOLR-12350
 URL: https://issues.apache.org/jira/browse/SOLR-12350
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Data-driven Schema
Reporter: Jan Høydahl
 Fix For: 7.4, master (8.0)


When improving data-driven mode in SOLR-9526 we discussed back and forth 
whether to set {{useDocValuesAsStored}} for the {{*_str}} copy of text fields. 
This dynamic field is currently defined as
{code:xml}
{code}
Having lived with the current setting since 7.0, I think it is too noisy to 
return all the _str fields since this is redundant content from the analysed 
original field. Thus I propose to do as [~hossman] initially suggested, and 
explicitly set it to false starting from 7.4:
{code:xml}

{code}
Note that this does not change how things are stored, only whether these fields 
are returned by default. The {{*_str}} fields will still be available for 
sorting, faceting, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8309) Don't use mutable FixedBitSets as live docs Bits

2018-05-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474103#comment-16474103
 ] 

Adrien Grand commented on LUCENE-8309:
--

Thanks for having a look. I'll do that.

> Don't use mutable FixedBitSets as live docs Bits
> 
>
> Key: LUCENE-8309
> URL: https://issues.apache.org/jira/browse/LUCENE-8309
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8309.patch
>
>
> Simon mentioned this idea first: it would be nice to not expose mutable 
> fixedbitsets as live docs, which makes it easy for consumers of live docs to 
> resurrect some documents by casting to a FixedBitSet and potentially corrupt 
> the index.
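
To illustrate the hazard described above (the class and method names here are 
for illustration only, not from the patch): any consumer that sees the live docs 
as a mutable {{FixedBitSet}} can flip a deleted document back on.
{code:java}
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.FixedBitSet;

// The downcast below is exactly what the issue wants to make impossible:
// mutating the live-docs bits behind the reader's back "resurrects" a deleted doc.
public class LiveDocsResurrection {
  static void resurrect(LeafReader reader, int deletedDocId) {
    Bits liveDocs = reader.getLiveDocs();          // null when the segment has no deletions
    if (liveDocs instanceof FixedBitSet) {
      ((FixedBitSet) liveDocs).set(deletedDocId);  // un-deletes the document
    }
  }
}
{code}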



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12349) Update Web-site "Features" section

2018-05-14 Thread JIRA
Jan Høydahl created SOLR-12349:
--

 Summary: Update Web-site "Features" section
 Key: SOLR-12349
 URL: https://issues.apache.org/jira/browse/SOLR-12349
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: website
Reporter: Jan Høydahl


The page [http://lucene.apache.org/solr/features.html] is a long list of 
features. On the top, the most prominent Solr features are highlighted in a 
"sales-like" manner with a graphical icon and short description. Below that are 
sections with more in-depth descriptions of various aspects of Solr:

*Data Handling, Query, Facets, Discovery, Plugins & Extensions, Statistics & 
Aggregations, Spatial, Rich Content, Performance, Scaling, Admin Interface*

However, this page is lagging behind and should be reworked (not just appended 
to at the end) to capture and highlight recent features such as:
 * Streaming Expressions
 ** SQL & JDBC
 ** Joins
 ** Statistical engine
 ** Graph
 ** ML
 * Auto scaling & Metrics
 * CDCR and HA
 * LTR
 * Security
 * ...more

I think a key to success is to dare to remove some content that perhaps was 
impressive at the time but is not worth a mention anymore :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 626 - Still Unstable

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/626/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/10/consoleText

[repro] Revision: a0acc63d020fbe3f50980820c5aba6601785eb68

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=18C606FFD125B03C 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fi-FI -Dtests.timezone=Jamaica -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ChaosMonkeySafeLeaderTest 
-Dtests.seed=18C606FFD125B03C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fi-FI -Dtests.timezone=Africa/Libreville -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.method=test -Dtests.seed=18C606FFD125B03C -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-SG -Dtests.timezone=Australia/Darwin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.seed=18C606FFD125B03C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-SG -Dtests.timezone=Australia/Darwin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsChaosMonkeySafeLeaderTest 
-Dtests.seed=18C606FFD125B03C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-IE -Dtests.timezone=Africa/Bissau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
a0acc63d020fbe3f50980820c5aba6601785eb68
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout a0acc63d020fbe3f50980820c5aba6601785eb68

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsChaosMonkeySafeLeaderTest
[repro]   ChaosMonkeySafeLeaderTest
[repro]   IndexSizeTriggerTest
[repro]   HdfsRestartWhileUpdatingTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.HdfsChaosMonkeySafeLeaderTest|*.ChaosMonkeySafeLeaderTest|*.IndexSizeTriggerTest|*.HdfsRestartWhileUpdatingTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=18C606FFD125B03C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-IE -Dtests.timezone=Africa/Bissau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 91947 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
[repro]   0/5 failed: org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout a0acc63d020fbe3f50980820c5aba6601785eb68

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-05-14 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474058#comment-16474058
 ] 

Alan Woodward commented on LUCENE-8306:
---

bq. Can we do without this new API?

I think it's important, particularly if we're talking about highlighting terms 
in very large intervals.  Here's an updated patch.  I've changed the API to use 
a collector interface rather than returning a list, which will make things much 
easier to implement on Spans and Intervals.  I've also implemented it on exact 
and sloppy phrases, including a test against a sloppy phrase with repeats.  
It's ended up simplifying the SloppyPhraseMatcher slightly, as I was trying to 
do too much to report the intervals (and getting inaccurate results in certain 
circumstances, which this API revealed, so it's already been useful!)
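
Purely to illustrate the collector-style shape mentioned above (these interface 
and method names are hypothetical, not the API in the attached patch): rather 
than materialising a list of term matches, the matcher pushes each occurrence to 
a callback, so Spans and Intervals only have to invoke it as they go.
{code:java}
// Hypothetical shapes, for illustration only -- not the patch's API.
interface TermMatchCollector {
  void collect(String term, int position, int startOffset, int endOffset);
}

// A consumer such as a highlighter just implements the callback:
class OffsetPrinter implements TermMatchCollector {
  @Override
  public void collect(String term, int position, int startOffset, int endOffset) {
    System.out.printf("%s @%d [%d,%d)%n", term, position, startOffset, endOffset);
  }
}
{code}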

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-05-14 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8306:
--
Attachment: LUCENE-8306.patch

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8309) Don't use mutable FixedBitSets as live docs Bits

2018-05-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474053#comment-16474053
 ] 

Michael McCandless commented on LUCENE-8309:


+1, but maybe add a class javadoc to {{FixedBits}}?

> Don't use mutable FixedBitSets as live docs Bits
> 
>
> Key: LUCENE-8309
> URL: https://issues.apache.org/jira/browse/LUCENE-8309
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8309.patch
>
>
> Simon mentioned this idea first: it would be nice to not expose mutable 
> fixedbitsets as live docs, which makes it easy for consumers of live docs to 
> resurrect some documents by casting to a FixedBitSet and potentially corrupt 
> the index.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 218 - Still Failing

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/218/

No tests ran.

Build Log:
[...truncated 10 lines...]
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*" 
returned status code 128:
stdout: 
stderr: remote: Counting objects: 59969   
remote: Counting objects: 151030   
remote: Counting objects: 285816   
remote: Counting objects: 419793   
remote: Counting objects: 530587   
remote: Counting objects: 653208   
remote: Counting objects: 813310   
remote: Counting objects: 963197, done.
remote: Compressing objects:   0% (1/186753)   
remote: Compressing objects:   1% (1868/186753)   
remote: Compressing objects:   2% (3736/186753)   
remote: Compressing objects:   3% (5603/186753)   
remote: Compressing objects:   4% (7471/186753)   
remote: Compressing objects:   5% (9338/186753)   
remote: Compressing objects:   6% (11206/186753)   
remote: Compressing objects:   7% (13073/186753)   
remote: Compressing objects:   8% (14941/186753)   
remote: Compressing objects:   9% (16808/186753)   
remote: Compressing objects:  10% (18676/186753)   
remote: Compressing objects:  11% (20543/186753)   
remote: Compressing objects:  12% (22411/186753)   
remote: Compressing objects:  13% (24278/186753)   
remote: Compressing objects:  14% (26146/186753)   
remote: Compressing objects:  15% (28013/186753)   
remote: Compressing objects:  16% (29881/186753)   
remote: Compressing objects:  17% (31749/186753)   
remote: Compressing objects:  18% (33616/186753)   
remote: Compressing objects:  19% (35484/186753)   
remote: Compressing objects:  20% (37351/186753)   
remote: Compressing objects:  21% (39219/186753)   
remote: Compressing objects:  22% (41086/186753)   
remote: Compressing objects:  23% (42954/186753)   
remote: Compressing objects:  24% (44821/186753)   
remote: Compressing objects:  25% (46689/186753)   
remote: Compressing objects:  26% (48556/186753)   
remote: Compressing objects:  27% (50424/186753)   
remote: Compressing objects:  28% (52291/186753)   
remote: Compressing objects:  29% (54159/186753)   
remote: Compressing objects:  30% (56026/186753)   
remote: Compressing objects:  31% (57894/186753)   
remote: Compressing objects:  32% (59761/186753)   
remote: Compressing objects:  33% (61629/186753)   
remote: Compressing objects:  34% (63497/186753)   
remote: Compressing objects:  35% (65364/186753)   
remote: Compressing objects:  36% (67232/186753)   
remote: Compressing objects:  37% (69099/186753)   
remote: Compressing objects:  38% (70967/186753)   
remote: Compressing objects:  39% (72834/186753)   
remote: Compressing objects:  40% (74702/186753)   
remote: Compressing objects:  41% (76569/186753)   
remote: Compressing objects:  42% (78437/186753)   
remote: Compressing objects:  43% (80304/186753)   
remote: Compressing objects:  44% (82172/186753)   
remote: Compressing objects:  45% (84039/186753)   
remote: Compressing objects:  46% (85907/186753)   
remote: Compressing objects:  47% (87774/186753)   
remote: Compressing objects:  48% (89642/186753)   
remote: Compressing objects:  49% (91509/186753)   
remote: Compressing objects:  50% (93377/186753)   
remote: Compressing objects:  51% (95245/186753)   
remote: Compressing objects:  52% (97112/186753)   
remote: Compressing objects:  53% (98980/186753)   
remote: Compressing objects:  54% (100847/186753)   
remote: Compressing objects:  55% (102715/186753)   
remote: Compressing objects:  56% (104582/186753)   
remote: Compressing objects:  57% (106450/186753)   
remote: Compressing objects:  58% (108317/186753)   
remote: Compressing objects:  59% (110185/186753)   
remote: Compressing objects:  60% (112052/186753)   
remote: Compressing objects:  61% (113920/186753)   
remote: Compressing objects:  62% (115787/186753)   
remote: Compressing objects:  63% (117655/186753)   
remote: Compressing objects:  64% (119522/186753)   
remote: Compressing objects:  65% (121390/186753)   
remote: Compressing objects:  66% (123257/186753)   
remote: Compressing objects:  67% (125125/186753)   
remote: Compressing objects:  68% (126993/186753)   
remote: Compressing objects:  69% (128860/186753)   
remote: Compressing objects:  70% (130728/186753)   
remote: Compressing 

[jira] [Commented] (SOLR-11823) Incorrect number of replica calculation when using Restore Collection API

2018-05-14 Thread Torben Greulich (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474025#comment-16474025
 ] 

Torben Greulich commented on SOLR-11823:


Hi,

We hit the same bug restoring a collection on a single-node test server.

{quote}
Solr cloud with available number of nodes:1 is insufficient for restoring a 
collection with 8 shards, total replicas per shard 2 a
{quote}

In production the collection is running on 3 nodes with 8 shards and a 
replication factor of 3. We looked into the Solr code and found this in 
*org.apache.solr.cloud.api.collections.RestoreCmd.java*

{code:java}
    int totalReplicasPerShard = numNrtReplicas + numTlogReplicas + 
numPullReplicas;
{code}

So the totalReplicasPerShard is the sum of NrtReplicas, TlogReplicas and 
PullReplicas. We then looked into the *collection_state.json* file from our 
backup and found:

{code}
{"core-name":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{
  "shard1":{
  ...
"router":{"name":"compositeId"},
"maxShardsPerNode":"8",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"1"}}
{code}
 
 replicationFactor=1 and tlogReplicas=1. So with the code snippet from above we 
get *totalReplicasPerShard=2*

After setting *tlogReplicas* to 0 we were able to restore our backup with just 
one node.
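
To make the arithmetic above concrete, a small self-contained sketch (the slot 
comparison is our reading of the error message, not code copied from 
{{RestoreCmd}}):
{code:java}
// With the backed-up collection_state.json above: nrt=1 + tlog=1 + pull=0 = 2
// replicas per shard, so 8 shards need 16 slots, while 1 node * maxShardsPerNode=8
// offers only 8 -- hence the "insufficient" error.
public class RestoreSlotCheck {
  public static void main(String[] args) {
    int numNrtReplicas = 1, numTlogReplicas = 1, numPullReplicas = 0;
    int numShards = 8, availableNodes = 1, maxShardsPerNode = 8;

    int totalReplicasPerShard = numNrtReplicas + numTlogReplicas + numPullReplicas; // 2
    int requiredSlots = numShards * totalReplicasPerShard;                          // 16
    int availableSlots = availableNodes * maxShardsPerNode;                         // 8

    if (requiredSlots > availableSlots) {
      System.out.println("insufficient: need " + requiredSlots
          + " replica slots but only " + availableSlots + " are available");
    }
  }
}
{code}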

> Incorrect number of replica calculation when using Restore Collection API
> -
>
> Key: SOLR-11823
> URL: https://issues.apache.org/jira/browse/SOLR-11823
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.1
>Reporter: Ansgar Wiechers
>Priority: Major
>
> I'm running Solr 7.1 (didn't test other versions) in SolrCloud mode on a 
> 3-node cluster and tried using the backup/restore API for the first time. 
> Backup worked fine, but when trying to restore the backed-up collection I ran 
> into an unexpected problem with the replication factor setting.
> I expected the command below to restore a backup of the collection "demo" 
> with 3 shards, creating 2 replicas per shard. Instead it's trying to create 6 
> replicas per shard:
> {noformat}
> # curl -s -k 
> 'https://localhost:8983/solr/admin/collections?action=restore=demo=/srv/backup/solr/solr-dev=demo=2=2'
> {
>   "error": {
> "code": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number 
> of available nodes.",
> "metadata": [
>   "error-class",
>   "org.apache.solr.common.SolrException",
>   "root-error-class",
>   "org.apache.solr.common.SolrException"
> ]
>   },
>   "exception": {
> "rspCode": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number of 
> available nodes."
>   },
>   "Operation restore caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Solr cloud with available number of nodes:3 is insufficient for restoring a 
> collection with 3 shards, total replicas per shard 6 and maxShardsPerNode 2. 
> Consider increasing maxShardsPerNode value OR number of available nodes.",
>   "responseHeader": {
> "QTime": 28,
> "status": 400
>   }
> }
> {noformat}
> Restoring a collection with only 2 shards tries to create 6 replicas as well, 
> so it looks to me like the restore API multiplies the replication factor with 
> the number of nodes, which is not how the replication factor behaves in other 
> contexts. The 
> [documentation|https://lucene.apache.org/solr/guide/7_1/collections-api.html] 
> also didn't lead me to expect this behavior:
> {quote}
> replicationFactor
>The number of replicas to be created for each shard.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8307) FileSwitchDirectory.checkPendingDeletions is backward

2018-05-14 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474017#comment-16474017
 ] 

Simon Willnauer commented on LUCENE-8307:
-

LGTM

> FileSwitchDirectory.checkPendingDeletions is backward
> -
>
> Key: LUCENE-8307
> URL: https://issues.apache.org/jira/browse/LUCENE-8307
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8307.patch
>
>
> It checks that both directories have pending deletions, while this method 
> should return true if there are any files pending deletion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-14 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473985#comment-16473985
 ] 

Alan Woodward commented on LUCENE-8273:
---

Thanks Steve, your patch looks great.

> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7314 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7314/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

24 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([A825699F6FB04AB2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testCollateExtendedResultsWithJsonNl

Error Message:

[JENKINS] Lucene-Solr-SmokeRelease-7.3 - Build # 26 - Failure

2018-05-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/26/

No tests ran.

Build Log:
[...truncated 30127 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 230 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.3 MB in 0.01 sec (18.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.1-src.tgz...
   [smoker] 32.0 MB in 0.04 sec (832.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.1.tgz...
   [smoker] 73.4 MB in 0.10 sec (767.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.1.zip...
   [smoker] 83.9 MB in 0.10 sec (835.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.3 MB in 0.01 sec (27.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.1-src.tgz...
   [smoker] 55.5 MB in 0.45 sec (124.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.1.tgz...
   [smoker] 154.6 MB in 1.72 sec (89.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.1.zip...
   [smoker] 155.6 MB in 1.31 sec (118.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.1.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1-java8
   [smoker] *** 

[jira] [Commented] (SOLR-11752) add gzip to jetty

2018-05-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473833#comment-16473833
 ] 

Mikhail Khludnev commented on SOLR-11752:
-

Regardless of the patch, the Yetus report is a little bit odd:

https://builds.apache.org/job/PreCommit-SOLR-Build/91/console
{quote}
  Finished build.




Archiving artifacts
[description-setter] Description set: SOLR-12307
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Finished: FAILURE
{quote}

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch, SOLR-11752.patch
>
>
> With a little bit of typing I am able to add gzip to my Solr's Jetty, which 
> is a big help for SAN access and completely out-of-band to Solr, *and* it only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code}
> < Content-Length: 13349
> {code}
> ---
> A regular query:
> With:
> {code}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code}
> < Content-Length: 17761
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10) - Build # 36 - Still Unstable!

2018-05-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/36/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([42FD30B4C2291E31:BBB0A31BFE5C53BB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:284)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)