[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6543 - Still Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6543/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value 'org.apache.solr.core.BlobStoreTestRequestHandler' for path 'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{"status":0, "QTime":0},
  "overlay":{
    "znodeVersion":0,
    "runtimeLib":{"colltest":{"name":"colltest", "version":1,  from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([6881547CA2075783:B0CC792B55DAF223]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+164) - Build # 3433 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3433/
Java: 32bit/jdk-9-ea+164 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([7B2738033CB782E8:F000EBD27DB1296C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-10596) unique and hll functions don't work after first bucket

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996121#comment-15996121
 ] 

ASF subversion and git services commented on SOLR-10596:


Commit 3546fc3f5826526ca1ca82086732d7a0ab0c1076 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3546fc3 ]

SOLR-10596: fix unique/hll docvalue iterator reuse


> unique and hll functions don't work after first bucket
> --
>
> Key: SOLR-10596
> URL: https://issues.apache.org/jira/browse/SOLR-10596
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
> Attachments: SOLR-10596.patch
>
>
> Unless you are sorting by the function, hll and unique fail to work after the 
> first bucket.  I think this only affects unreleased master (7.0-dev) since it 
> looks to have been caused by LUCENE-7407
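For context, a minimal SolrJ sketch of the kind of request the quoted description refers to: a terms facet whose buckets each compute unique() and hll(). Before this fix, buckets after the first could come back wrong unless the aggregation was also the sort. The base URL and the field names ({{cat}}, {{manu}}) are placeholders, not anything from the commit.
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class UniqueHllFacetSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.set("rows", "0");
      // terms facet with nested unique() and hll() aggregations over several buckets
      q.set("json.facet",
          "{cats:{type:terms, field:cat, limit:5, facet:{u:'unique(manu)', h:'hll(manu)'}}}");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}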






[jira] [Commented] (SOLR-8995) Use lambdas to simplify code

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996110#comment-15996110
 ] 

Mike Drob commented on SOLR-8995:
-

[~noble.paul] - there's been a lot of work on this issue. Can we close it out 
with the appropriate fix version and then, if there is additional work, handle 
it in follow-on JIRAs?
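As a generic illustration of the kind of simplification this issue covers ({{executor}} and {{core}} are stand-in names, not code from the patch):
{code}
// Before: anonymous inner class
executor.submit(new Runnable() {
  @Override
  public void run() {
    core.close();
  }
});

// After: a lambda, or a method reference
executor.submit(() -> core.close());
executor.submit(core::close);
{code}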

> Use lambdas to simplify code
> 
>
> Key: SOLR-8995
> URL: https://issues.apache.org/jira/browse/SOLR-8995
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8995.patch
>
>







[jira] [Commented] (SOLR-7864) timeAllowed causing ClassCastException

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996104#comment-15996104
 ] 

Mike Drob commented on SOLR-7864:
-

Possibly also related to SOLR-9882

> timeAllowed causing ClassCastException
> --
>
> Key: SOLR-7864
> URL: https://issues.apache.org/jira/browse/SOLR-7864
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Markus Jelsma
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-7864.patch
>
>
> If timeAllowed kicks in, following exception is thrown and user gets HTTP 500.
> {code}
> 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter  [  
>  search] – null:java.lang.ClassCastException: 
> org.apache.solr.response.ResultContext cannot be cast to 
> org.apache.solr.common.SolrDocumentList
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}
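Not the fix itself - just a client-side sketch of the scenario above, assuming a plain SolrJ HttpSolrClient against a local core: issue a query with timeAllowed and check the partialResults flag instead of assuming a complete result.
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TimeAllowedSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.set("timeAllowed", "100");   // milliseconds
      QueryResponse rsp = client.query(q);
      // When the time limit kicks in, Solr marks the response as partial.
      Object partial = rsp.getResponseHeader().get("partialResults");
      if (Boolean.TRUE.equals(partial)) {
        System.out.println("partial results: " + rsp.getResults().getNumFound());
      }
    }
  }
}
{code}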






[jira] [Commented] (SOLR-10483) Support for IntPointField field types

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996100#comment-15996100
 ] 

Mike Drob commented on SOLR-10483:
--

Looking at this again, maybe I misunderstood what the goal is, and it's not a 
duplicate.

> Support for IntPointField field types
> -
>
> Key: SOLR-10483
> URL: https://issues.apache.org/jira/browse/SOLR-10483
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Suzuki
> Attachments: SOLR-10483.patch
>
>
> Currently the SolrJDBC is unable to handle fields of type Boolean.
> When we query the example techproducts
> {code}
> SELECT popularity FROM techproducts limit 10
> {code}
> We get the following error: cannot be cast to java.lang.String.
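For reference, a minimal JDBC sketch of the reported scenario. The zkHost, port, and collection are placeholders, and the connection-string shape follows the Parallel SQL interface documentation; this is only a reproduction sketch, not the proposed fix.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSketch {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:solr://localhost:9983?collection=techproducts"; // placeholder zkHost/collection
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT popularity FROM techproducts LIMIT 10")) {
      while (rs.next()) {
        // read the numeric column as a number instead of relying on string coercion
        System.out.println(rs.getLong("popularity"));
      }
    }
  }
}
{code}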






[jira] [Commented] (SOLR-10596) unique and hll functions don't work after first bucket

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996098#comment-15996098
 ] 

ASF subversion and git services commented on SOLR-10596:


Commit 3a7aedcef9a6c9f854508629759bcb9b766d2b08 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3a7aedc ]

SOLR-10596: fix unique/hll docvalue iterator reuse


> unique and hll functions don't work after first bucket
> --
>
> Key: SOLR-10596
> URL: https://issues.apache.org/jira/browse/SOLR-10596
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
> Attachments: SOLR-10596.patch
>
>
> Unless you are sorting by the function, hll and unique fail to work after the 
> first bucket.  I think this only affects unreleased master (7.0-dev) since it 
> looks to have been caused by LUCENE-7407






[jira] [Commented] (SOLR-10483) Support for IntPointField field types

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996092#comment-15996092
 ] 

Mike Drob commented on SOLR-10483:
--

Done in SOLR-8396, I think.

> Support for IntPointField field types
> -
>
> Key: SOLR-10483
> URL: https://issues.apache.org/jira/browse/SOLR-10483
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Suzuki
> Attachments: SOLR-10483.patch
>
>
> Currently the SolrJDBC is unable to handle fields of type Boolean.
> When we query the example techproducts
> {code}
> SELECT popularity FROM techproducts limit 10
> {code}
> We get the following error: cannot be cast to java.lang.String.






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996087#comment-15996087
 ] 

Mike Drob commented on SOLR-10447:
--

[~ichattopadhyaya] - is everything here done? Can we tag the correct fix 
versions and close this out?

> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.
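A sketch of how the proposed command would be invoked, assuming it is exposed through the Collections API endpoint like the existing alias commands. The host, port, and exact response shape are assumptions.
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ListAliasesSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical call shape for the proposed LISTALIASES action.
    URL url = new URL("http://localhost:8983/solr/admin/collections?action=LISTALIASES&wt=json");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // expected to include an "aliases" map of alias -> collection(s)
      }
    }
  }
}
{code}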






[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2017-05-03 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996083#comment-15996083
 ] 

Harsh J commented on SOLR-6305:
---

[~thelabdude] is right here in the description, BTW. The Hadoop APIs let you 
pass an arbitrary replication value via the FileSystem.create API; when passed, 
it overrides the local default (the dfs.replication config). Solr's usage 
effectively asks the NameNode for its default replication factor and then 
creates the file with that value, ignoring the local configuration. As a 
result, you cannot control the replication factor of index files in Solr 
without changing the whole HDFS cluster's default.
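A minimal sketch of that API difference, with the path and values illustrative rather than Solr's actual code:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path indexFile = new Path("/solr/collection1/core_node1/data/index/example.bin");

    // Roughly what happens today: ask the server for its defaults and use
    // the server's replication factor.
    FsServerDefaults defaults = fs.getServerDefaults(indexFile);
    System.out.println("server default replication = " + defaults.getReplication());

    // What the request amounts to: pass the desired replication explicitly;
    // HDFS honours it per file, regardless of the cluster default.
    short desiredReplication = 1;
    try (FSDataOutputStream out = fs.create(indexFile, true,
        conf.getInt("io.file.buffer.size", 4096), desiredReplication, defaults.getBlockSize())) {
      out.writeBytes("example");
    }
  }
}
{code}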

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
> example
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing different replication factor per 
> collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
> return getServerDefaults();
>   }
> Path is ignored ;-)






[jira] [Commented] (SOLR-8256) Remove cluster setting 'legacy cloud' in 6x.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996071#comment-15996071
 ] 

Erick Erickson commented on SOLR-8256:
--

+1

> Remove cluster setting 'legacy cloud' in 6x.
> 
>
> Key: SOLR-8256
> URL: https://issues.apache.org/jira/browse/SOLR-8256
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
> Fix For: 6.0
>
> Attachments: SOLR-8256.patch
>
>
> We don't have the old back compat concerns anymore. It's time to remove this 
> mostly unknown setting and start defaulting to this behavior that starts us 
> down the path of zk=truth.






[JENKINS] Lucene-Solr-Tests-master - Build # 1800 - Unstable

2017-05-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1800/

1 tests failed.
FAILED:  org.apache.solr.update.HardAutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([FC3BA23C1E6D990:B511D55B42C83785]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:896)
at 
org.apache.solr.update.HardAutoCommitTest.testCommitWithin(HardAutoCommitTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:889)
... 40 more




Build Log:
[...truncated 12242 lines...]
   [junit4] Suite: org.apache.solr.update.HardAutoCommitTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1282 - Still Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1282/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.update.HdfsTransactionLog
  at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:132)
  at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
  at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:159)
  at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:116)
  at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:107)
  at sun.reflect.GeneratedConstructorAccessor175.newInstance(Unknown Source)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:785)
  at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:847)
  at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1102)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:972)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:855)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:959)
  at org.apache.solr.core.CoreContainer.lambda$load$7(CoreContainer.java:594)
  at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:132)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:159)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:116)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:107)
at sun.reflect.GeneratedConstructorAccessor175.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:785)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:847)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1102)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:972)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:855)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:959)
at 
org.apache.solr.core.CoreContainer.lambda$load$7(CoreContainer.java:594)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([61047B16454CC0FB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:302)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 848 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/848/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI

Error Message:
Error from server at https://127.0.0.1:58115/solr: Failed to create shard

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:58115/solr: Failed to create shard
at 
__randomizedtesting.SeedInfo.seed([D6BDD8A5CDC7C01F:BC5C56CEF05D7667]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:447)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:388)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1383)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1134)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8256) Remove cluster setting 'legacy cloud' in 6x.

2017-05-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995846#comment-15995846
 ] 

Tomás Fernández Löbbe commented on SOLR-8256:
-

"legacyCloud=true" seems to still be the default. Maybe we can change that 
first for 7.0?
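For reference, the flag itself can already be flipped per cluster via CLUSTERPROP; the question above is only about the default. A sketch, with the host and port illustrative:
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LegacyCloudOffSketch {
  public static void main(String[] args) throws Exception {
    // Sets the cluster property, moving the cluster toward the zk=truth behavior.
    URL url = new URL("http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=false");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}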

> Remove cluster setting 'legacy cloud' in 6x.
> 
>
> Key: SOLR-8256
> URL: https://issues.apache.org/jira/browse/SOLR-8256
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
> Fix For: 6.0
>
> Attachments: SOLR-8256.patch
>
>
> We don't have the old back compat concerns anymore. It's time to remove this 
> mostly unknown setting and start defaulting to this behavior that starts us 
> down the path of zk=truth.






[jira] [Commented] (SOLR-10296) Convert existing Ref Guide and post-conversion cleanup

2017-05-03 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995805#comment-15995805
 ] 

Cassandra Targett commented on SOLR-10296:
--

To add to the list so I don't forget: there are 2 pages in Confluence that use 
the {{excerpt}} macro and 3 places that use the {{excerpt-include}} macro to 
pull in a section from one of the other pages. The fact that these contain 
shared content is lost during the conversion.

The pages are:

* https://cwiki.apache.org/confluence/display/solr/Collection-Specific+Tools 
(collection-specific-tools.adoc) has an excerpt that is pulled into 
https://cwiki.apache.org/confluence/display/solr/Core-Specific+Tools 
(core-specific-tools.adoc) and 
https://cwiki.apache.org/confluence/display/solr/Using+the+Solr+Administration+User+Interface
 (using-the-solr-administration-user-interface.adoc)
** These pages have an additional problem, which is that the content is mostly 
links to other pages, and during conversion those are turned into hard-coded 
links back to Confluence. This might actually be coming from the export from 
Confluence, but it really doesn't matter where it comes from, they'll need to 
be fixed.
* 
https://cwiki.apache.org/confluence/display/solr/Field+Type+Definitions+and+Properties
 (field-type-definitions-and-properties.adoc) has an excerpt that is pulled 
into https://cwiki.apache.org/confluence/display/solr/Defining+Fields 
(defining-fields.adoc)

There are a couple options for resolution IMO:

* Ignore the problem for now. Content is duplicated across pages, but these two 
cases aren't really that onerous.
* Start an "inclusion library" of small bits of content that isn't a standalone 
page, but is designed to be pulled into other pages when they are built as HTML 
and PDF. I want to start doing this eventually, but hoped to avoid it until we 
got everything all set up and stabilized.

The right solution might be a mix of both - ignore it for now, and hopefully 
soon introduce the inclusion library in a more thoughtful way.
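If we go the inclusion-library route, the AsciiDoc side is just an include directive - roughly like this, with the file names invented for illustration:
{code}
// Shared fragment saved once, e.g. in _includes/collection-tools-intro.adoc.
// Each page that previously shared the Confluence excerpt then pulls it in:

include::_includes/collection-tools-intro.adoc[]
{code}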

> Convert existing Ref Guide and post-conversion cleanup
> --
>
> Key: SOLR-10296
> URL: https://issues.apache.org/jira/browse/SOLR-10296
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> We have developed several tools and scripts for converting the Ref Guide out 
> of Confluence which get us most of the way to a fully converted set of pages. 
> However, we already know that there are several issues that could not be 
> automated.
> From https://github.com/ctargett/refguide-asciidoc-poc/issues/27, we have 
> this list:
> * The conversion process will insert TODOs for several items that we thought 
> might be problematic during conversion; these need to be reviewed and 
> resolved. Some of these items are also covered in the below topics.
> * Block elements in tables. The current version of the PDF creation tool we 
> are using does not handle those properly (see 
> https://github.com/ctargett/refguide-asciidoc-poc/issues/13). In some cases, 
> we should remove the table entirely and present the content in a new way 
> (using, most often, [labeled 
> lists|http://asciidoctor.org/docs/user-manual/#labeled-list] instead).
> * Review and (usually) remove huge Tables of Contents from the top of pages. 
> The current design of the online version will automatically create a TOC for 
> the page, we don't need another one and in some cases this TOC was 
> hand-created so can't be removed via conversion.
> * Non-image attachments. Some SVG files will be converted to images, but they 
> should not be treated as images.
> * Failed link conversions. Despite my best attempts, many dummy URLs are 
> treated by Confluence as real URLs (meaning, dummy URLs like 
> {{http://host:port/solr}} are coded in Confluence's XHTML with <a> tags). 
> These will be converted as URLs but will throw errors during the conversion 
> process. In some cases, the URLs aren't just these example URLs but are 
> indicative of a real problem that needs to be resolved.
> * Spurious  tags. Some API pages have a list of available calls 
> structured as a list but without being a real ordered or unordered list. 
> These will convert badly. The issue 
> https://github.com/ctargett/refguide-asciidoc-poc/issues/31 has a list of 
> pages where this might be a problem.
> * Appropriate Lead Paragraphs. The stylesheet for HTML pages will make the 
> first paragraph of every HTML page a slightly larger font, by way of 
> introduction. In many cases, the first paragraph is not really ready for that 
> sort of treatment and should be revised to be a more succinct introduction to 
> the feature or further contents of the page.
> More problems may 

[jira] [Closed] (SOLR-5277) Stamp core names on log entries for certain classes

2017-05-03 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya closed SOLR-5277.
--
Resolution: Implemented

> Stamp core names on log entries for certain classes
> ---
>
> Key: SOLR-5277
> URL: https://issues.apache.org/jira/browse/SOLR-5277
> Project: Solr
>  Issue Type: Bug
>  Components: search, update
>Affects Versions: 4.3.1, 4.4, 4.5
>Reporter: Dmitry Kan
> Attachments: SOLR-5277.patch, SOLR-5277.patch
>
>
> It is handy that certain Java classes stamp a [coreName] on a log entry. It 
> would be useful for a multicore setup if more classes would stamp this 
> information.
> In particular, we came across a situation with commits coming in quick 
> succession to the same multicore shard and found it hard to figure out 
> whether it was the same core or different cores.
> The classes in question with log sample output:
> o.a.s.c.SolrCore
> 06:57:53.577 [qtp1640764503-13617] INFO  org.apache.solr.core.SolrCore - 
> SolrDeletionPolicy.onCommit: commits:num=2
> 11:53:19.056 [coreLoadExecutor-3-thread-1] INFO  
> org.apache.solr.core.SolrCore - Soft AutoCommit: if uncommited for 1000ms;
> o.a.s.u.UpdateHandler
> 14:45:24.447 [commitScheduler-9-thread-1] INFO  
> org.apache.solr.update.UpdateHandler - start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 06:57:53.591 [qtp1640764503-13617] INFO  org.apache.solr.update.UpdateHandler 
> - end_commit_flush
> o.a.s.s.SolrIndexSearcher
> 14:45:24.553 [commitScheduler-7-thread-1] INFO  
> org.apache.solr.search.SolrIndexSearcher - Opening Searcher@1067e5a9 main
> The original question was posted on #solr and on SO:
> http://stackoverflow.com/questions/19026577/how-to-output-solr-core-name-with-log4j






[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995801#comment-15995801
 ] 

Erick Erickson commented on SOLR-9867:
--

It's nothing new, I don't think. Possibly related to Shalin's comment on 
SOLR-10562?

So let's go ahead and commit, and perhaps beast it again when something happens 
on SOLR-10562?

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mikhail Khludnev
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch, 
> stdout_90
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool gets an 
> error when it tries to create the SolrCore (again - it already exists), and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence, and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.
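A minimal sketch of the existence check described above (the instance-dir path is a placeholder): treat the core as existing if its core.properties is on disk, even if a status call doesn't report it because the core is still loading.
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CoreExistsSketch {
  public static void main(String[] args) {
    Path instanceDir = Paths.get("example/schemaless/solr/gettingstarted"); // placeholder
    // A loading core won't show up in a core status call, but its core.properties is already there.
    boolean coreExists = Files.exists(instanceDir.resolve("core.properties"));
    if (coreExists) {
      System.out.println("Core already exists - skip CREATE and never delete the instance dir.");
    }
  }
}
{code}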






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3985 - Still Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3985/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([87828FF292F29E53:FD6B0283C0EF3AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:159)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:861)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:620)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995734#comment-15995734
 ] 

Mikhail Khludnev commented on SOLR-9867:


{code}
   [junit4]   2> 35073 INFO  (qtp1031286021-158) [n:localhost:32820_solr] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=2=4=testCloudExamplePrompt=testCloudExamplePrompt=CREATE=2=json}
 status=0 QTime=5117
   [junit4]   2> 35108 INFO  (qtp1031286021-207) [n:localhost:32820_solr 
c:testCloudExamplePrompt s:shard2 r:core_node1 
x:testCloudExamplePrompt_shard2_replica1] o.a.s.h.SolrConfigHandler Executed 
config commands successfully and persisted to ZK 
[{"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}}]
   [junit4]   2> 35111 INFO  (qtp1031286021-207) [n:localhost:32820_solr 
c:testCloudExamplePrompt s:shard2 r:core_node1 
x:testCloudExamplePrompt_shard2_replica1] o.a.s.h.SolrConfigHandler Waiting up 
to 30 secs for 4 replicas to set the property overlay to be of version 0 for 
collection testCloudExamplePrompt
   [junit4]   2> 35112 INFO  (Thread-81) [n:localhost:32820_solr] 
o.a.s.c.SolrCore config update listener called for core 
testCloudExamplePrompt_shard2_replica2
   [junit4]   2> 35115 INFO  
(solrHandlerExecutor-81-thread-1-processing-n:localhost:32820_solr 
x:testCloudExamplePrompt_shard2_replica1 s:shard2 c:testCloudExamplePrompt 
r:core_node1) [n:localhost:32820_solr c:testCloudExamplePrompt s:shard2 
r:core_node1 x:testCloudExamplePrompt_shard2_replica1] 
o.a.s.h.SolrConfigHandler Time elapsed : 0 secs, maxWait 30
   [junit4]   2> 35115 INFO  (Thread-81) [n:localhost:32820_solr] 
o.a.s.c.SolrCore core reload testCloudExamplePrompt_shard2_replica2
{code}
The collection has been created and the config update is sent; the ZK listener 
thread {{(Thread-81)}} starts a core reload:
{code}
   [junit4]   2> 39099 INFO  (qtp1031286021-155) [n:localhost:32820_solr] 
o.a.s.m.SolrMetricManager Closing metric reporters for 
registry=solr.core.testCloudExamplePrompt.shard2.replica2, tag=149464127
   [junit4]   2> 39099 INFO  (qtp1031286021-155) [n:localhost:32820_solr] 
o.a.s.m.SolrMetricManager Closing metric reporters for 
registry=solr.collection.testCloudExamplePrompt.shard2.leader, tag=149464127
   [junit4]   2> 39106 INFO  
(zkCallback-16-thread-1-processing-n:localhost:32820_solr) 
[n:localhost:32820_solr] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/testCloudExamplePrompt/state.json] for collection 
[testCloudExamplePrompt] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 39106 INFO  (qtp1031286021-221) [n:localhost:32820_solr] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={deleteInstanceDir=true=testCloudExamplePrompt_shard1_replica2=/admin/cores=true=UNLOAD=javabin=2}
 status=0 QTime=107
   [junit4]   2> 39107 INFO  (Thread-81) [n:localhost:32820_solr 
c:testCloudExamplePrompt s:shard2 r:core_node3 
x:testCloudExamplePrompt_shard2_replica2] o.a.s.m.r.SolrJmxReporter JMX 
monitoring for 'solr.core.testCloudExamplePrompt.shard1.replica2' (registry 
'solr.core.testCloudExamplePrompt.shard1.replica2') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@167c2a09
   [junit4]   2> 39108 INFO  (qtp1031286021-187) [n:localhost:32820_solr] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={deleteInstanceDir=true=testCloudExamplePrompt_shard2_replica1=/admin/cores=true=UNLOAD=javabin=2}
 status=0 QTime=77
   [junit4]   2> 39109 WARN  
(zkCallback-16-thread-1-processing-n:localhost:32820_solr) 
[n:localhost:32820_solr] o.a.s.c.LeaderElector Our node is no longer in 
line to be leader
   [junit4]   2> 39109 WARN  
(zkCallback-16-thread-2-processing-n:localhost:32820_solr) 
[n:localhost:32820_solr] o.a.s.c.LeaderElector Our node is no longer in 
line to be leader
   [junit4]   2> 39113 WARN  (Thread-81) [n:localhost:32820_solr 
c:testCloudExamplePrompt s:shard2 r:core_node3 
x:testCloudExamplePrompt_shard2_replica2] o.a.s.c.ZkController listener throws 
error
   [junit4]   2> org.apache.solr.common.SolrException: Unable to reload core 
[testCloudExamplePrompt_shard1_replica2]
   [junit4]   2>at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1197)
   [junit4]   2>at 
org.apache.solr.core.SolrCore.lambda$getConfListener$18(SolrCore.java:2953)
   [junit4]   2>at 
org.apache.solr.cloud.ZkController.lambda$fireEventListeners$4(ZkController.java:2350)
   [junit4]   2>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> Caused by: java.lang.NullPointerException
   [junit4]   2>at 
org.apache.solr.metrics.SolrMetricManager.loadShardReporters(SolrMetricManager.java:1032)
   [junit4]   2>at 
org.apache.solr.metrics.SolrCoreMetricManager.loadReporters(SolrCoreMetricManager.java:89)
   [junit4]   

[jira] [Commented] (SOLR-5277) Stamp core names on log entries for certain classes

2017-05-03 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995717#comment-15995717
 ] 

Mike Drob commented on SOLR-5277:
-

MDC work was committed over several other issues, so I think this is taken care 
of.

> Stamp core names on log entries for certain classes
> ---
>
> Key: SOLR-5277
> URL: https://issues.apache.org/jira/browse/SOLR-5277
> Project: Solr
>  Issue Type: Bug
>  Components: search, update
>Affects Versions: 4.3.1, 4.4, 4.5
>Reporter: Dmitry Kan
> Attachments: SOLR-5277.patch, SOLR-5277.patch
>
>
> It is handy that certain Java classes stamp a [coreName] on a log entry. It 
> would be useful for multicore setup if more classes would stamp this 
> information.
> In particular, we came across a situation with commits coming in quick 
> succession to the same multicore shard and found it hard to figure out 
> whether it was the same core or different cores.
> The classes in question with log sample output:
> o.a.s.c.SolrCore
> 06:57:53.577 [qtp1640764503-13617] INFO  org.apache.solr.core.SolrCore - 
> SolrDeletionPolicy.onCommit: commits:num=2
> 11:53:19.056 [coreLoadExecutor-3-thread-1] INFO  
> org.apache.solr.core.SolrCore - Soft AutoCommit: if uncommited for 1000ms;
> o.a.s.u.UpdateHandler
> 14:45:24.447 [commitScheduler-9-thread-1] INFO  
> org.apache.solr.update.UpdateHandler - start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 06:57:53.591 [qtp1640764503-13617] INFO  org.apache.solr.update.UpdateHandler 
> - end_commit_flush
> o.a.s.s.SolrIndexSearcher
> 14:45:24.553 [commitScheduler-7-thread-1] INFO  
> org.apache.solr.search.SolrIndexSearcher - Opening Searcher@1067e5a9 main
> The original question was posted on #solr and on SO:
> http://stackoverflow.com/questions/19026577/how-to-output-solr-core-name-with-log4j
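As background for the MDC approach referenced above, here is a minimal sketch of how a 
per-core marker typically ends up on log lines (assuming SLF4J's MDC plus a Log4j-style 
pattern containing %X{core}; the key name and class are illustrative, not Solr's actual code):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Illustrative only: stamp a core name into the logging context for the
// duration of a request, so a pattern like "[%X{core}] %m%n" prints it.
public class MdcCoreNameExample {
  private static final Logger log = LoggerFactory.getLogger(MdcCoreNameExample.class);

  void handleRequest(String coreName) {
    MDC.put("core", coreName);      // every log line on this thread now carries the core name
    try {
      log.info("start commit");     // rendered as e.g. "[collection1] start commit"
    } finally {
      MDC.remove("core");           // avoid leaking the value to unrelated work on this thread
    }
  }
}
{code}
With this style, whether a given log line shows the core name depends only on the logging 
pattern, not on each individual class remembering to prepend it.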






Re: Solr Ref Guide: Locking Confluence Friday

2017-05-03 Thread Cassandra Targett
I feel like it'd be less confusing if we hold *release* of 6.6.0 until
we have the ref guide content merged and migrated - even though the
release of the Ref Guide will remain as a separate vote for the time
being. And it will of course be easier to merge the branch with master
& backport to branch_6x before the branch_6_6 is made (in the sense of
saving another merge step).

Since it sounds like Joel wants a couple more days anyway, why don't
we see where we're at Monday - if the merge of the ref guide branch
still doesn't have an ETA then maybe we don't hold any longer; if it's
another day, maybe we can hold for that. Once we get rolling with that
process we'll know where the endpoint is.

By the way - in terms of doing the actual work of the manual cleanup
that needs to happen after conversion, Hoss and I plan to start on
Friday but other volunteers are welcome. I'll even throw in a free
screenshare with me (I know!) if you want help to get rolling.

On Wed, May 3, 2017 at 3:04 PM, Ishan Chattopadhyaya
 wrote:
>> I personally would like to push it back a day or two to finish one ticket
>> and get everything into CHANGES.txt.
> Sure Joel, that works with me.
>
> On Thu, May 4, 2017 at 12:58 AM, Joel Bernstein  wrote:
>>
>> I like the idea of having the new ref guide for 6.6. The Streaming
>> Expressions wiki page has been completely overwhelmed with content at this
>> point and I'd like to see how we can better organize these docs for the 6.6
>> release.
>>
>> I have started a thread on cutting the 6.6 branch. I personally would like
>> to push it back a day or two to finish one ticket and get everything
>> into CHANGES.txt.
>>
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Wed, May 3, 2017 at 3:12 PM, Ishan Chattopadhyaya
>>  wrote:
>>>
>>> Hi Cassandra,
>>>
>>> > If we start this conversion soon, we'll have the new Ref Guide for
>>> > Solr 6.6, which personally I'd like to do to get acclimated to the
>>> > process before the major edits that we'll need to do for 7.0.
>>>
>>> > ...
>>>
>>> > Once we have the content migrated, we can merge the branch
>>> > "jira/solr-10290" to master and backport it to 6x for releasing the
>>> > Ref Guide for 6.6.
>>>
>>> > ...
>>>
>>> > ... I plan to lock Confluence to edits this week on Friday morning, May
>>> > 5.
>>>
>>> I was planning on cutting a release branch for 6.6 on Thursday, 4th May.
>>> Does it make sense to hold it off until the 10290 work has been backported
>>> (to, say, Monday 8th or Tuesday 9th)? If so, does it make sense to mark
>>> SOLR-10290 as a blocker for 6.6?
>>>
>>> Regards,
>>> Ishan
>>>
>>> On Wed, May 3, 2017 at 1:28 AM, Erick Erickson 
>>> wrote:

 +1, thanks for all your work on this (Hoss too!)

 On Tue, May 2, 2017 at 12:25 PM, Cassandra Targett
  wrote:
 > Hi -
 >
 > I wanted to maximize eyeballs on this, so sending to the list instead
 > of as a comment in an issue.
 >
 > I believe we are at the point where we can lock Confluence to further
 > edits and start the conversion process. If there are no objections, I
 > plan to lock Confluence to edits this week on FRIDAY morning, MAY 5.
 > That gives folks a couple more days to work on stuff you may have been
 > planning.
 >
 > As a reminder of where we are:
 >
 > - We have a build script to make a PDF, and the PDF is half the size of
 > the one from Confluence
 > - We have a process to vote on and publish the PDF (basically, the
 > same process)
 > - We have agreement on where to host the HTML version
 > - We have documented steps for how to publish the HTML version
 > - We have a Jenkins job that's building the PDF and HTML version daily
 > - We've agreed on how to deal with branches & backports
 >
 > Not all of the subtasks are complete in SOLR-10290, but 2 of those
 > relate to converting the content, and the others are either not
 > showstoppers IMO or depend on getting this part done. Anyone have any
 > other issues they feel need to be resolved before we start flipping
 > this switch?
 >
 > If we start this conversion soon, we'll have the new Ref Guide for
 > Solr 6.6, which personally I'd like to do to get acclimated to the
 > process before the major edits that we'll need to do for 7.0.
 >
 > Once we have the content migrated, we can merge the branch
 > "jira/solr-10290" to master and backport it to 6x for releasing the
 > Ref Guide for 6.6.
 >
 > Again, if there are no objections, I plan to lock Confluence to edits
 > this week on Friday morning, May 5. I intend to put a banner across
 > the page to warn potential editors that it's locked. And assuming all
 > goes well, we won't need to re-enable edits later.
 >
 > Cassandra
 >
 > 

[jira] [Commented] (SOLR-10604) ChaosMonkey doesn't expire session or cause connection losses by default

2017-05-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995681#comment-15995681
 ] 

Mark Miller commented on SOLR-10604:


I have some memory of filing an issue for this or something like this before. I 
think at the time the tests did not beast fully though and so I didn't want to 
add any new complications before fixing the current issues. AFAIK, they are in 
a lot better shape now, but we would want to beast well before enabling this 
stuff.

> ChaosMonkey doesn't expire session or cause connection losses by default
> 
>
> Key: SOLR-10604
> URL: https://issues.apache.org/jira/browse/SOLR-10604
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>
> It looks like the intention in ChaosMonkey is that, unless explicitly set, it 
> should randomly cause session expiration and connection loss:
> {code:java}
> if (EXP != null) {
>   expireSessions = EXP; 
> } else {
>   expireSessions = chaosRandom.nextBoolean();
> }
> if (CONN_LOSS != null) {
>   causeConnectionLoss = CONN_LOSS;
> } else {
>   causeConnectionLoss = chaosRandom.nextBoolean();
> }
> {code}
> but {{EXP}} and {{CONN_LOSS}} are defined as:
> {code:java}
>   private static final Boolean CONN_LOSS = 
> Boolean.valueOf(System.getProperty("solr.tests.cloud.cm.connloss", null));
> private static final Boolean EXP = 
> Boolean.valueOf(System.getProperty("solr.tests.cloud.cm.exp", null));
> {code}
> This makes them "false" (not null) unless those properties are set. 
> The fix is trivial, but we should see how it affects our tests before 
> submitting.
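
For illustration only, a minimal sketch of the kind of trivial fix described above: leave 
the flag null when the system property is absent, so the {{chaosRandom.nextBoolean()}} 
default branch is still reachable (the helper name is hypothetical, not the committed patch):
{code:java}
// Hypothetical helper: return null when the property is not set at all,
// and only parse it when it is, so the random default can still apply.
private static Boolean parseBooleanProperty(String name) {
  String raw = System.getProperty(name);            // null when the property is absent
  return raw == null ? null : Boolean.valueOf(raw);
}

private static final Boolean CONN_LOSS = parseBooleanProperty("solr.tests.cloud.cm.connloss");
private static final Boolean EXP = parseBooleanProperty("solr.tests.cloud.cm.exp");
{code}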






[jira] [Updated] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9867:
-
Attachment: stdout_90

Here's the failing log. Thank heavens for Mark's beasting program

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mikhail Khludnev
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch, 
> stdout_90
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.
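
As a rough illustration of the existence check mentioned above (looking for core.properties 
on disk instead of trusting the core status call while the core is still loading), here is a 
minimal sketch; the class name and path layout are assumptions for the example, not the 
actual patch:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch only: a core directory that already contains core.properties is treated
// as existing, even if the status API does not list it yet because it is loading.
final class CoreExistenceCheck {
  static boolean coreLikelyExists(Path solrHome, String coreName) {
    Path coreProps = solrHome.resolve(coreName).resolve("core.properties");
    return Files.exists(coreProps);
  }

  public static void main(String[] args) {
    // Hypothetical layout for the schemaless example's solr home and core name.
    System.out.println(coreLikelyExists(Paths.get("example/schemaless/solr"), "gettingstarted"));
  }
}
{code}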






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+164) - Build # 19546 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19546/
Java: 64bit/jdk-9-ea+164 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C07877802C234027:A21589C1E3AD2019]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 11234 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MetricsHandlerTest
   [junit4]   2> Creating dataDir: 

[jira] [Created] (SOLR-10604) ChaosMonkey doesn't expire session or cause connection losses by default

2017-05-03 Thread JIRA
Tomás Fernández Löbbe created SOLR-10604:


 Summary: ChaosMonkey doesn't expire session or cause connection 
losses by default
 Key: SOLR-10604
 URL: https://issues.apache.org/jira/browse/SOLR-10604
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Tomás Fernández Löbbe


It looks like the intention in ChaosMonkey is that, unless explicitly set, it 
should randomly cause session expiration and connection loss:
{code:java}
if (EXP != null) {
  expireSessions = EXP; 
} else {
  expireSessions = chaosRandom.nextBoolean();
}
if (CONN_LOSS != null) {
  causeConnectionLoss = CONN_LOSS;
} else {
  causeConnectionLoss = chaosRandom.nextBoolean();
}
{code}
but {{EXP}} and {{CONN_LOSS}} are defined as:
{code:java}
  private static final Boolean CONN_LOSS = 
Boolean.valueOf(System.getProperty("solr.tests.cloud.cm.connloss", null));
private static final Boolean EXP = 
Boolean.valueOf(System.getProperty("solr.tests.cloud.cm.exp", null));
{code}
This makes them "false" (not null) unless those properties are set. 
The fix is trivial, but we should see how it affects our tests before submitting.







Re: Solr Ref Guide: Locking Confluence Friday

2017-05-03 Thread Ishan Chattopadhyaya
> I personally would like to push it back a day or two to finish one
ticket and get everything into CHANGES.txt.
Sure Joel, that works with me.

On Thu, May 4, 2017 at 12:58 AM, Joel Bernstein  wrote:

> I like the idea of having the new ref guide for 6.6. The Streaming
> Expressions wiki page has been completely overwhelmed with content at this
> point and I'd like to see how we can better organize these docs for the 6.6
> release.
>
> I have started a thread on cutting the 6.6 branch. I personally would like
> to push it back a day or two to finish one ticket and get everything
> into CHANGES.txt.
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, May 3, 2017 at 3:12 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Hi Cassandra,
>>
>> > If we start this conversion soon, we'll have the new Ref Guide for
>> > Solr 6.6, which personally I'd like to do to get acclimated to the
>> > process before the major edits that we'll need to do for 7.0.
>>
>> > ...
>>
>> > Once we have the content migrated, we can merge the branch
>> > "jira/solr-10290" to master and backport it to 6x for releasing the
>> > Ref Guide for 6.6.
>>
>> > ...
>>
>> > ... I plan to lock Confluence to edits this week on Friday morning, May
>> 5.
>>
>> I was planning on cutting a release branch for 6.6 on Thursday, 4th May.
>> Does it make sense to hold it off until the 10290 work has been backported
>> (to, say, Monday 8th or Tuesday 9th)? If so, does it make sense to mark
>> SOLR-10290 as a blocker for 6.6?
>>
>> Regards,
>> Ishan
>>
>> On Wed, May 3, 2017 at 1:28 AM, Erick Erickson 
>> wrote:
>>
>>> +1, thanks for all your work on this (Hoss too!)
>>>
>>> On Tue, May 2, 2017 at 12:25 PM, Cassandra Targett
>>>  wrote:
>>> > Hi -
>>> >
>>> > I wanted to maximize eyeballs on this, so sending to the list instead
>>> > of as a comment in an issue.
>>> >
>>> > I believe we are at the point where we can lock Confluence to further
>>> > edits and start the conversion process. If there are no objections, I
>>> > plan to lock Confluence to edits this week on FRIDAY morning, MAY 5.
>>> > That gives folks a couple more days to work on stuff you may have been
>>> > planning.
>>> >
>>> > As a reminder of where we are:
>>> >
>>> > - We have a build script to make a PDF, and the PDF is half the size of
>>> > the one from Confluence
>>> > - We have a process to vote on and publish the PDF (basically, the
>>> same process)
>>> > - We have agreement on where to host the HTML version
>>> > - We have documented steps for how to publish the HTML version
>>> > - We have a Jenkins job that's building the PDF and HTML version daily
>>> > - We've agreed on how to deal with branches & backports
>>> >
>>> > Not all of the subtasks are complete in SOLR-10290, but 2 of those
>>> > relate to converting the content, and the others are either not
>>> > showstoppers IMO or depend on getting this part done. Anyone have any
>>> > other issues they feel need to be resolved before we start flipping
>>> > this switch?
>>> >
>>> > If we start this conversion soon, we'll have the new Ref Guide for
>>> > Solr 6.6, which personally I'd like to do to get acclimated to the
>>> > process before the major edits that we'll need to do for 7.0.
>>> >
>>> > Once we have the content migrated, we can merge the branch
>>> > "jira/solr-10290" to master and backport it to 6x for releasing the
>>> > Ref Guide for 6.6.
>>> >
>>> > Again, if there are no objections, I plan to lock Confluence to edits
>>> > this week on Friday morning, May 5. I intend to put a banner across
>>> > the page to warn potential editors that it's locked. And assuming all
>>> > goes well, we won't need to re-enable edits later.
>>> >
>>> > Cassandra
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>


Re: 6.6 Release branch

2017-05-03 Thread Ishan Chattopadhyaya
+1. Perhaps I'll cut the branch next week due to the reference guide
changes, as discussed in the "Solr Ref Guide: Locking Confluence Friday"
thread.

On Thu, May 4, 2017 at 12:35 AM, Joel Bernstein  wrote:

> I believe Ishan wanted to cut the branch for 6.6 tomorrow (May 4).
>
> I'm still working on https://issues.apache.org/jira/browse/SOLR-10217,
> which I'd like to get in for 6.6. I believe I can finish this up by end of
> day tomorrow.
>
> How are other people feeling about the 6.6 release?
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>


[jira] [Resolved] (LUCENE-7814) DateRangePrefixTree trouble finding years >= 292,000,000

2017-05-03 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-7814.
--
   Resolution: Fixed
Fix Version/s: 6.6
   master (7.0)

> DateRangePrefixTree trouble finding years >= 292,000,000
> 
>
> Key: LUCENE-7814
> URL: https://issues.apache.org/jira/browse/LUCENE-7814
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0), 6.6
>
> Attachments: LUCENE_7814_DateRangePrefixTree_last_millennia_bug.patch
>
>
> Using DateRangePrefixTree (in spatial-extras): When indexing years on or 
> after the last million-year period supported by most computer systems, 
> +292,000,000, a range query like {{\[1970 TO *\]}} will fail to find it.  
> This happens as a result of some off-by-one errors in DateRangePrefixTree.






[jira] [Commented] (LUCENE-7814) DateRangePrefixTree trouble finding years >= 292,000,000

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995523#comment-15995523
 ] 

ASF subversion and git services commented on LUCENE-7814:
-

Commit d7583ba67dd75e69d86d982f98a3073a9ca59ebe in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d7583ba ]

LUCENE-7814: DateRangePrefixTree bug in years >= 292M

(cherry picked from commit 40c8ea4)


> DateRangePrefixTree trouble finding years >= 292,000,000
> 
>
> Key: LUCENE-7814
> URL: https://issues.apache.org/jira/browse/LUCENE-7814
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE_7814_DateRangePrefixTree_last_millennia_bug.patch
>
>
> Using DateRangePrefixTree (in spatial-extras): When indexing years on or 
> after the last million-year period supported by most computer systems, 
> +292,000,000, a range query like {{\[1970 TO *\]}} will fail to find it.  
> This happens as a result of some off-by-one errors in DateRangePrefixTree.






[jira] [Commented] (LUCENE-7814) DateRangePrefixTree trouble finding years >= 292,000,000

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995517#comment-15995517
 ] 

ASF subversion and git services commented on LUCENE-7814:
-

Commit 40c8ea4b1a82ffab841de87e236ecdb9483381e8 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=40c8ea4 ]

LUCENE-7814: DateRangePrefixTree bug in years >= 292M


> DateRangePrefixTree trouble finding years >= 292,000,000
> 
>
> Key: LUCENE-7814
> URL: https://issues.apache.org/jira/browse/LUCENE-7814
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE_7814_DateRangePrefixTree_last_millennia_bug.patch
>
>
> Using DateRangePrefixTree (in spatial-extras): When indexing years on or 
> after the last million-year period supported by most computer systems, 
> +292,000,000, a range query like {{\[1970 TO *\]}} will fail to find it.  
> This happens as a result of some off-by-one errors in DateRangePrefixTree.






[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995503#comment-15995503
 ] 

Mikhail Khludnev commented on SOLR-9867:


It sounds like it can go off tomorrow.
[~erickerickson], can you share an output from _our old friend_ by any chance? 

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mikhail Khludnev
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.






[jira] [Assigned] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-9867:
--

Assignee: Mikhail Khludnev

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mikhail Khludnev
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.






[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995485#comment-15995485
 ] 

Erick Erickson commented on SOLR-9867:
--

Out of 1,000 runs I see only one problem, our old friend ObjectTracker:

Throwable #1: java.lang.AssertionError: ObjectTracker found 5 object(s) that 
were not released!!! [NRTCachingDirectory, NRTCachingDirectory, 
NRTCachingDirectory, MDCAwareThreadPoolExecutor, TransactionLog]

All other 999 showed "BUILD SUCCESSFUL"


> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.






Re: Solr Ref Guide: Locking Confluence Friday

2017-05-03 Thread Joel Bernstein
I like the idea of having the new ref guide for 6.6. The Streaming
Expressions wiki page has been completely overwhelmed with content at this
point and I'd like to see how we can better organize these docs for the 6.6
release.

I have started a thread on cutting the 6.6 branch. I personally would like
to push it back a day or two to finish one ticket and get everything
into CHANGES.txt.



Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, May 3, 2017 at 3:12 PM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Hi Cassandra,
>
> > If we start this conversion soon, we'll have the new Ref Guide for
> > Solr 6.6, which personally I'd like to do to get acclimated to the
> > process before the major edits that we'll need to do for 7.0.
>
> > ...
>
> > Once we have the content migrated, we can merge the branch
> > "jira/solr-10290" to master and backport it to 6x for releasing the
> > Ref Guide for 6.6.
>
> > ...
>
> > ... I plan to lock Confluence to edits this week on Friday morning, May
> 5.
>
> I was planning on cutting a release branch for 6.6 on Thursday, 4th May.
> Does it make sense to hold it off until the 10290 work has been backported
> (to, say, Monday 8th or Tuesday 9th)? If so, does it make sense to mark
> SOLR-10290 as a blocker for 6.6?
>
> Regards,
> Ishan
>
> On Wed, May 3, 2017 at 1:28 AM, Erick Erickson 
> wrote:
>
>> +1, thanks for all your work on this (Hoss too!)
>>
>> On Tue, May 2, 2017 at 12:25 PM, Cassandra Targett
>>  wrote:
>> > Hi -
>> >
>> > I wanted to maximize eyeballs on this, so sending to the list instead
>> > of as a comment in an issue.
>> >
>> > I believe we are at the point where we can lock Confluence to further
>> > edits and start the conversion process. If there are no objections, I
>> > plan to lock Confluence to edits this week on FRIDAY morning, MAY 5.
>> > That gives folks a couple more days to work on stuff you may have been
>> > planning.
>> >
>> > As a reminder of where we are:
>> >
>> > - We have a build script to make a PDF, and the PDF is half the size of
>> > the one from Confluence
>> > - We have a process to vote on and publish the PDF (basically, the same
>> process)
>> > - We have agreement on where to host the HTML version
>> > - We have documented steps for how to publish the HTML version
>> > - We have a Jenkins job that's building the PDF and HTML version daily
>> > - We've agreed on how to deal with branches & backports
>> >
>> > Not all of the subtasks are complete in SOLR-10290, but 2 of those
>> > relate to converting the content, and the others are either not
>> > showstoppers IMO or depend on getting this part done. Anyone have any
>> > other issues they feel need to be resolved before we start flipping
>> > this switch?
>> >
>> > If we start this conversion soon, we'll have the new Ref Guide for
>> > Solr 6.6, which personally I'd like to do to get acclimated to the
>> > process before the major edits that we'll need to do for 7.0.
>> >
>> > Once we have the content migrated, we can merge the branch
>> > "jira/solr-10290" to master and backport it to 6x for releasing the
>> > Ref Guide for 6.6.
>> >
>> > Again, if there are no objections, I plan to lock Confluence to edits
>> > this week on Friday morning, May 5. I intend to put a banner across
>> > the page to warn potential editors that it's locked. And assuming all
>> > goes well, we won't need to re-enable edits later.
>> >
>> > Cassandra
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


Re: Solr Ref Guide: Locking Confluence Friday

2017-05-03 Thread Ishan Chattopadhyaya
Hi Cassandra,

> If we start this conversion soon, we'll have the new Ref Guide for
> Solr 6.6, which personally I'd like to do to get acclimated to the
> process before the major edits that we'll need to do for 7.0.

> ...

> Once we have the content migrated, we can merge the branch
> "jira/solr-10290" to master and backport it to 6x for releasing the
> Ref Guide for 6.6.

> ...

> ... I plan to lock Confluence to edits this week on Friday morning, May 5.

I was planning on cutting a release branch for 6.6 on Thursday, 4th May.
Does it make sense to hold it off until the 10290 work has been backported
(to, say, Monday 8th or Tuesday 9th)? If so, does it make sense to mark
SOLR-10290 as a blocker for 6.6?

Regards,
Ishan

On Wed, May 3, 2017 at 1:28 AM, Erick Erickson 
wrote:

> +1, thanks for all your work on this (Hoss too!)
>
> On Tue, May 2, 2017 at 12:25 PM, Cassandra Targett
>  wrote:
> > Hi -
> >
> > I wanted to maximize eyeballs on this, so sending to the list instead
> > of as a comment in an issue.
> >
> > I believe we are at the point where we can lock Confluence to further
> > edits and start the conversion process. If there are no objections, I
> > plan to lock Confluence to edits this week on FRIDAY morning, MAY 5.
> > That gives folks a couple more days to work on stuff you may have been
> > planning.
> >
> > As a reminder of where we are:
> >
> > - We have a build script to make a PDF, and the PDF is half the size of
> > the one from Confluence
> > - We have a process to vote on and publish the PDF (basically, the same
> process)
> > - We have agreement on where to host the HTML version
> > - We have documented steps for how to publish the HTML version
> > - We have a Jenkins job that's building the PDF and HTML version daily
> > - We've agreed on how to deal with branches & backports
> >
> > Not all of the subtasks are complete in SOLR-10290, but 2 of those
> > relate to converting the content, and the others are either not
> > showstoppers IMO or depend on getting this part done. Anyone have any
> > other issues they feel need to be resolved before we start flipping
> > this switch?
> >
> > If we start this conversion soon, we'll have the new Ref Guide for
> > Solr 6.6, which personally I'd like to do to get acclimated to the
> > process before the major edits that we'll need to do for 7.0.
> >
> > Once we have the content migrated, we can merge the branch
> > "jira/solr-10290" to master and backport it to 6x for releasing the
> > Ref Guide for 6.6.
> >
> > Again, if there are no objections, I plan to lock Confluence to edits
> > this week on Friday morning, May 5. I intend to put a banner across
> > the page to warn potential editors that it's locked. And assuming all
> > goes well, we won't need to re-enable edits later.
> >
> > Cassandra
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


6.6 Release branch

2017-05-03 Thread Joel Bernstein
I believe Ishan wanted to cut the branch for 6.6 tomorrow (May 4).

I'm still working on https://issues.apache.org/jira/browse/SOLR-10217,
which I'd like to get in for 6.6. I believe I can finish this up by end of
day tomorrow.

How are other people feeling about the 6.6 release?


Joel Bernstein
http://joelsolr.blogspot.com/


[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+164) - Build # 3430 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3430/
Java: 64bit/jdk-9-ea+164 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([D40F1EBBF837D269:5F28CD6AB93179ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-10603) No system property or default value specified for org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider

2017-05-03 Thread baychae (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995434#comment-15995434
 ] 

baychae commented on SOLR-10603:


I have implemented it as advised and fail to see the effect.

> No system property or default value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
> --
>
> Key: SOLR-10603
> URL: https://issues.apache.org/jira/browse/SOLR-10603
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.5
> Environment: osx, solr in /usr/local, 
>Reporter: baychae
>
> I have a startup command of:
> bin/solr start -v -cloud -s ~/var/solr
> In ~/var/solr I have zoo.cfg and solr.xml. In solr.xml I have inserted:
> <str name="zkCredentialsProvider">${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}</str>
> In bin/solr.in.sh I have:
> SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
>  \
>   
> -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  \
>   -DzkDigestUsername=admin-user -DzkDigestPassword=secret \
>   -DzkDigestReadonlyUsername=readonly-user 
> -DzkDigestReadonlyPassword=CHANGEME-READONLY-PASSWORD"
> The error is: 
> 2017-05-03 16:46:44.839 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: No system property or default 
> value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  
> value:${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
>   at 
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:65)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:303)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.core.Config.substituteProperties(Config.java:231)
>   at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:63)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:131)
>   at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:113)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:141)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:267)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:169)
>   at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
>   at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
>   at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
>   at org.eclipse.jetty.util.Scanner.scan(Scanner.java:392)
>   at 
> org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> 

[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-03 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995404#comment-15995404
 ] 

Michael Sun commented on SOLR-10317:


bq. please summarize the conversation here.
Sure. Thanks for the reminder.

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can be used 
> as well. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.






[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-03 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995362#comment-15995362
 ] 

Ishan Chattopadhyaya commented on SOLR-10317:
-

bq. preferred medium of communication (Skype, phone etc.).
If discussed over a non-public medium, please summarize the conversation here.

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can be used 
> as well. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.






[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-03 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995357#comment-15995357
 ] 

Michael Sun commented on SOLR-10317:


bq.  Final results are due on 4th May.
That's tomorrow. Cool! 

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> Some of the frameworks above support building, starting, indexing/querying and 
> stopping Solr. However, the benchmarks they run are very limited. Any of these 
> can be a starting point, or a new framework can be used instead. The motivation 
> is to cover every functionality of Solr with a corresponding benchmark that is 
> run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-03 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995347#comment-15995347
 ] 

Ishan Chattopadhyaya commented on SOLR-10317:
-

bq. I can also co-mentor this project.
Thanks Michael! Final results are due on 4th May.

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> Some of the frameworks above support building, starting, indexing/querying and 
> stopping Solr. However, the benchmarks they run are very limited. Any of these 
> can be a starting point, or a new framework can be used instead. The motivation 
> is to cover every functionality of Solr with a corresponding benchmark that is 
> run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-03 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995344#comment-15995344
 ] 

Michael Sun commented on SOLR-10317:


[~vivek.nar...@uga.edu] That's great. I will ping you offline to set a time.

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can as well 
> be used. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10603) No system property or default value specified for org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider

2017-05-03 Thread baychae (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995342#comment-15995342
 ] 

baychae commented on SOLR-10603:


Ok will do. 

Thank you for your guidance. 

> No system property or default value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
> --
>
> Key: SOLR-10603
> URL: https://issues.apache.org/jira/browse/SOLR-10603
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.5
> Environment: osx, solr in /usr/local, 
>Reporter: baychae
>
> I have a startup command of:
> bin/solr start -v -cloud -s ~/var/solr
> In ~/var/solr I have zoo.cfg and solr.xml. In solr.xml I have inserted:
> <str name="zkCredentialsProvider">${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}</str>
> In bin/solr.in.sh I have:
> SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
>  \
>   
> -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  \
>   -DzkDigestUsername=admin-user -DzkDigestPassword=secret \
>   -DzkDigestReadonlyUsername=readonly-user 
> -DzkDigestReadonlyPassword=CHANGEME-READONLY-PASSWORD"
> The error is: 
> 2017-05-03 16:46:44.839 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: No system property or default 
> value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  
> value:${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
>   at 
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:65)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:303)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.core.Config.substituteProperties(Config.java:231)
>   at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:63)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:131)
>   at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:113)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:141)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:267)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:169)
>   at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
>   at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
>   at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
>   at org.eclipse.jetty.util.Scanner.scan(Scanner.java:392)
>   at 
> org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> 

[jira] [Resolved] (SOLR-10603) No system property or default value specified for org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider

2017-05-03 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10603.
-
Resolution: Not A Bug

> No system property or default value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
> --
>
> Key: SOLR-10603
> URL: https://issues.apache.org/jira/browse/SOLR-10603
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.5
> Environment: osx, solr in /usr/local, 
>Reporter: baychae
>
> I have a startup command of:
> bin/solr start -v -cloud -s ~/var/solr
> In ~/var/solr I have zoo.cfg and solr.xml. In solr.xml I have inserted:
> <str name="zkCredentialsProvider">${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}</str>
> In bin/solr.in.sh I have:
> SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
>  \
>   
> -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  \
>   -DzkDigestUsername=admin-user -DzkDigestPassword=secret \
>   -DzkDigestReadonlyUsername=readonly-user 
> -DzkDigestReadonlyPassword=CHANGEME-READONLY-PASSWORD"
> The error is: 
> 2017-05-03 16:46:44.839 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: No system property or default 
> value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  
> value:${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
>   at 
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:65)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:303)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.core.Config.substituteProperties(Config.java:231)
>   at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:63)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:131)
>   at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:113)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:141)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:267)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:169)
>   at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
>   at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
>   at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
>   at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
>   at org.eclipse.jetty.util.Scanner.scan(Scanner.java:392)
>   at 
> org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at 
> 

[jira] [Commented] (SOLR-10603) No system property or default value specified for org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider

2017-05-03 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995326#comment-15995326
 ] 

Hoss Man commented on SOLR-10603:
-

{code}
${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
{code}

That syntax tells Solr that at runtime you will define a system property with 
the _name_ 
"org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider"
 ... but you have not done that -- you have defined a system property (via the 
{{SOLR_ZK_CREDS_AND_ACLS}} solr.in.sh option) that has a *value* of 
"org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider".

I would strongly suggest you do not modify the {{solr.xml}} from its original 
form...

{code}
<str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
{code}

and instead *only* change {{solr.in.sh}}.

If you *are* going to change {{solr.xml}} then you must use the correct syntax. 
 If your goal is to force the solr.xml zkCredentialsProvider option to be 
"org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider"
 - regardless of any system properties -- then you do not want a "$\{...\}" 
variable...


{code}
<str name="zkCredentialsProvider">org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider</str>
{code}
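
To illustrate the substitution semantics described above, here is a small self-contained Java sketch of how a "${name}" / "${name:default}" placeholder resolves against system properties. It only mimics the behaviour for illustration; it is not Solr's PropertiesUtil code, and the property names in main() are the ones from this issue.

{code}
public class PropertySubstitutionDemo {
  static String resolve(String placeholder) {
    // Strip the leading "${" and trailing "}", then split an optional ":default" part.
    String body = placeholder.substring(2, placeholder.length() - 1);
    int colon = body.indexOf(':');
    String name = colon < 0 ? body : body.substring(0, colon);
    String def = colon < 0 ? null : body.substring(colon + 1);
    String value = System.getProperty(name, def);
    if (value == null) {
      throw new IllegalStateException(
          "No system property or default value specified for " + name);
    }
    return value;
  }

  public static void main(String[] args) {
    // Simulates the -DzkCredentialsProvider=... set via SOLR_ZK_CREDS_AND_ACLS.
    System.setProperty("zkCredentialsProvider",
        "org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider");
    // Resolves, because the *name* "zkCredentialsProvider" is defined as a property:
    System.out.println(resolve(
        "${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}"));
    // Fails, because here the *name* is the provider class, which nobody defines as a property:
    System.out.println(resolve(
        "${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}"));
  }
}
{code}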

If you need further assistance, please ask on the solr-user mailing list -- the 
jira issue tracker is not a user support forum; it is for tracking concrete 
bugs/features once they have been confirmed.

http://lucene.apache.org/solr/community.html


> No system property or default value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
> --
>
> Key: SOLR-10603
> URL: https://issues.apache.org/jira/browse/SOLR-10603
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.5
> Environment: osx, solr in /usr/local, 
>Reporter: baychae
>
> I have a startup command of:
> bin/solr start -v -cloud -s ~/var/solr
> In ~/var/solr I have zoo.cfg and solr.xml. In solr.xml I have inserted:
> <str name="zkCredentialsProvider">${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}</str>
> In bin/solr.in.sh I have:
> SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
>  \
>   
> -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  \
>   -DzkDigestUsername=admin-user -DzkDigestPassword=secret \
>   -DzkDigestReadonlyUsername=readonly-user 
> -DzkDigestReadonlyPassword=CHANGEME-READONLY-PASSWORD"
> The error is: 
> 2017-05-03 16:46:44.839 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: No system property or default 
> value specified for 
> org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
>  
> value:${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
>   at 
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:65)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:303)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
>   at org.apache.solr.core.Config.substituteProperties(Config.java:231)
>   at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:63)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:131)
>   at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:113)
>   at 
> org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:141)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:267)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:169)
>   at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
>   at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
>   at 
> 

[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-05-03 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995322#comment-15995322
 ] 

Cassandra Targett commented on SOLR-10295:
--

I made some changes manually to change "apache-solr-reference-guide.html" to 
"index.html", corrected the references I mentioned earlier to [~hossman], and 
uploaded those changes to the test Guide that's online on the website. 

If you go to https://lucene.apache.org/solr/guide/test-10290/, you'll notice 
that the index.html page is served as the default, and it does not append 
"index.html" to the URL. So, if someone lands there and copies/pastes, they 
will not share URLs with "index.html" appended. Thus, I don't believe we need 
any additional RewriteRule in .htaccess at all at this point.

This was additionally a nice test of the process I initially came up with for 
updating an already published Guide. Modifications to that will be coming later 
today.

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10559) Add let and get Streaming Expressions

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995314#comment-15995314
 ] 

ASF subversion and git services commented on SOLR-10559:


Commit 3fdbbb7c6f95b64b73903a0372cae4c339678709 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3fdbbb7 ]

SOLR-10559: Fix TupStream to respect field order


> Add let and get Streaming Expressions
> -
>
> Key: SOLR-10559
> URL: https://issues.apache.org/jira/browse/SOLR-10559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10559.patch, SOLR-10559.patch, SOLR-10559.patch, 
> SOLR-10559.patch
>
>
> The *let* and *get* Streaming Expressions allow the tuples in a stream to be 
> assigned to a variable so they can be used more than once during an expression.
> This builds on the *list* and *cell* expressions (SOLR-10551) 
> Here is the sample syntax:
> {code}
> let(cell(a, expr), 
> cell(b, expr), 
> list(cell(a, get(a)),
>  cell(b, get(b)),
>  cell(correlation, correlate(get(a), fielda, get(b), fieldb)))
> {code}
> In the example above the *let* expression is saving the contents of two 
> *cell* expressions (a, b). The *get* expression is retrieving the tuples and 
> using them later in the expression.
> So for example two facet expressions could be stored in the *let*, and then 
> displayed and correlated later in the expression.
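
As an illustration of how an expression of this shape might be exercised end to end, below is a small JDK-only sketch that posts one to the /stream handler and prints the returned tuples. The host, collection name and the inner search() parameters are hypothetical; this is not an excerpt of the attached patches.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class LetGetStreamDemo {
  public static void main(String[] args) throws Exception {
    // Mirrors the sample syntax above: store a stream in cell "a", then get it back.
    String expr = "let(cell(a, search(collection1, q=\"*:*\", fl=\"id\", sort=\"id asc\")),"
        + "list(cell(a, get(a))))";
    URL url = new URL("http://localhost:8983/solr/collection1/stream?expr="
        + URLEncoder.encode(expr, StandardCharsets.UTF_8.name()));
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // tuples come back as JSON
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}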



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10601) StreamExpressionParser should handle white space around = in named parameters

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995317#comment-15995317
 ] 

ASF subversion and git services commented on SOLR-10601:


Commit 0fb1cdf3518f1206e96a7c9fea2bcc21b521c304 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0fb1cdf ]

SOLR-10601: StreamExpressionParser should handle white space around = in named 
parameters


> StreamExpressionParser should handle white space around = in named parameters
> -
>
> Key: SOLR-10601
> URL: https://issues.apache.org/jira/browse/SOLR-10601
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10601.patch
>
>
> Currently the StreamExpressionParser doesn't recognize a named property if 
> there is whitespace around the =.
> This is a quick fix.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10536) stats Streaming Expression should work in non-SolrCloud mode

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995315#comment-15995315
 ] 

ASF subversion and git services commented on SOLR-10536:


Commit fcb384192089f0e035e6408da2a4f54e1f19a222 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcb3841 ]

SOLR-10536: stats Streaming Expression should work in non-SolrCloud mode


> stats Streaming Expression should work in non-SolrCloud mode
> 
>
> Key: SOLR-10536
> URL: https://issues.apache.org/jira/browse/SOLR-10536
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10536.patch, SOLR-10536.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10582) Add Correlation Evaluator

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995311#comment-15995311
 ] 

ASF subversion and git services commented on SOLR-10582:


Commit 16b86f9a7f6b9aa48b1e99369e491a495ee4d5ef in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16b86f9 ]

SOLR-10582: Add Correlation Evaluator


> Add Correlation Evaluator
> -
>
> Key: SOLR-10582
> URL: https://issues.apache.org/jira/browse/SOLR-10582
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10582.patch, SOLR-10582.patch
>
>
> The Correlation Evaluator computes the Pearson's product-moment correlation 
> of two columns of data.
> Syntax:
> {code}
> let(a=expr, 
> b=expr, 
> c=col(a, fieldName), 
> d=col(b, fieldName), 
> tuple(corr=corr(c,d)))
> {code}
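
For reference, the statistic the Correlation Evaluator computes, Pearson's product-moment correlation, can be sketched in plain Java as below. This is only an illustration of the math, not the evaluator's implementation.

{code}
public class PearsonDemo {
  /** Pearson's product-moment correlation of two equal-length columns. */
  static double corr(double[] x, double[] y) {
    int n = x.length;
    double meanX = 0, meanY = 0;
    for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
    meanX /= n;
    meanY /= n;
    double cov = 0, varX = 0, varY = 0;
    for (int i = 0; i < n; i++) {
      double dx = x[i] - meanX, dy = y[i] - meanY;
      cov += dx * dy;   // covariance numerator
      varX += dx * dx;  // variance numerators
      varY += dy * dy;
    }
    return cov / Math.sqrt(varX * varY);
  }

  public static void main(String[] args) {
    double[] a = {1, 2, 3, 4, 5};
    double[] b = {2, 4, 6, 8, 10};
    System.out.println(corr(a, b)); // prints 1.0 for perfectly correlated columns
  }
}
{code}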



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7815) Remove PostingsHighlighter in 7.0

2017-05-03 Thread David Smiley (JIRA)
David Smiley created LUCENE-7815:


 Summary: Remove PostingsHighlighter in 7.0
 Key: LUCENE-7815
 URL: https://issues.apache.org/jira/browse/LUCENE-7815
 Project: Lucene - Core
  Issue Type: Task
  Components: modules/highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: master (7.0)


The UnifiedHighlighter is derived from the PostingsHighlighter, which should be 
quite obvious to anyone who cares to look at them.  There is no feature in the 
PH that is not also present in the UH.  The PH is marked as 
{{lucene.experimental}} so we may remove it in 7.0.  The upgrade path is pretty 
easy given the API similarity.  By removing the PH, the goal is to ease 
maintenance.  Some issues lately have been applicable to both of these 
highlighters, which is annoying because the fix has to be applied twice; in one 
case I forgot to.  And of course having both causes user confusion.

What I propose to do in this issue is move {{CustomSeparatorBreakIterator}} and 
{{WholeBreakIterator}} out of the {{postingshighlight}} package into the 
{{uhighlight}} package (or perhaps add a {{common}} or {{util}} should future 
highlighters need them?).  Then of course remove {{postingshighlight}} package.
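
For anyone weighing the upgrade path, a minimal UnifiedHighlighter usage sketch is below. The index setup and the "body" field are hypothetical, and the calls reflect the uhighlight API as I understand it rather than a prescribed migration recipe.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;
import org.apache.lucene.store.Directory;

public class UnifiedHighlighterSketch {
  // "body" is a hypothetical indexed text field with stored values.
  static String[] highlight(Directory dir, String queryTerm) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      UnifiedHighlighter highlighter =
          new UnifiedHighlighter(searcher, new StandardAnalyzer());
      TermQuery query = new TermQuery(new Term("body", queryTerm));
      TopDocs topDocs = searcher.search(query, 10);
      // One highlighted snippet (or null) per hit in topDocs.
      return highlighter.highlight("body", query, topDocs);
    }
  }
}
{code}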



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10559) Add let and get Streaming Expressions

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995307#comment-15995307
 ] 

ASF subversion and git services commented on SOLR-10559:


Commit 91b446e627458b830f2706a23e976cb732e09779 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=91b446e ]

SOLR-10559: Fixed compilation error


> Add let and get Streaming Expressions
> -
>
> Key: SOLR-10559
> URL: https://issues.apache.org/jira/browse/SOLR-10559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10559.patch, SOLR-10559.patch, SOLR-10559.patch, 
> SOLR-10559.patch
>
>
> The *let* and *get* Streaming Expressions allow the tuples in a stream to be 
> assigned to a variable so they can be used more than once during an expression.
> This builds on the *list* and *cell* expressions (SOLR-10551) 
> Here is the sample syntax:
> {code}
> let(cell(a, expr), 
> cell(b, expr), 
> list(cell(a, get(a)),
>  cell(b, get(b)),
>  cell(correlation, correlate(get(a), fielda, get(b), fieldb)))
> {code}
> In the example above the *let* expression is saving the contents of two 
> *cell* expressions (a, b). The *get* expression is retrieving the tuples and 
> using them later in the expression.
> So for example two facet expressions could be stored in the *let*, and then 
> displayed and correlated later in the expression.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10566) Add timeseries Streaming Expression

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995313#comment-15995313
 ] 

ASF subversion and git services commented on SOLR-10566:


Commit 5269fce9f872c3bb8207c8b6e5dbdcafbdd825cd in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5269fce ]

SOLR-10566: Fix error handling


> Add timeseries Streaming Expression
> ---
>
> Key: SOLR-10566
> URL: https://issues.apache.org/jira/browse/SOLR-10566
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10566.patch, SOLR-10566.patch
>
>
> This ticket is to add specific time series support to streaming expressions. 
> Under the covers the timeseries expression will use the json facet API. 
> sample syntax:
> {code}
> timeseries(collection, 
>q="*:*", 
>field="rectime_dt",
>start = "2011-04-01T01:00:00.000Z",
>  end = "2016-04-01T01:00:00.000Z",
>  gap = "+1MONTH",
>count(*),
>sum(price))
> {code}
> This goes nicely with SOLR-10559. The idea is to support expressions that 
> cross-correlate and regress time series data.
> {code}
> let(cell(timeseriesA, timeseries(...)), 
> cell(timeSeriesB, timeseries(...)), 
> list(cell(a, get(timeseriesA)),
>  cell(b, get(timeseriesB)),
>  cell(stats, regres(col(timeseriesA, count(*)), col(timeseriesB, 
> count(*))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10559) Add let and get Streaming Expressions

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995310#comment-15995310
 ] 

ASF subversion and git services commented on SOLR-10559:


Commit e0a94f675710f785aa1340fb548d029fd75be60d in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0a94f6 ]

SOLR-10559: Updates TupStream and enhances evaluators to work over values in 
the SteamContext


> Add let and get Streaming Expressions
> -
>
> Key: SOLR-10559
> URL: https://issues.apache.org/jira/browse/SOLR-10559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10559.patch, SOLR-10559.patch, SOLR-10559.patch, 
> SOLR-10559.patch
>
>
> The *let* and *get* Streaming Expressions allow the tuples in a stream to be 
> assigned to a variable so they can be used more than once during an expression.
> This builds on the *list* and *cell* expressions (SOLR-10551) 
> Here is the sample syntax:
> {code}
> let(cell(a, expr), 
> cell(b, expr), 
> list(cell(a, get(a)),
>  cell(b, get(b)),
>  cell(correlation, correlate(get(a), fielda, get(b), fieldb)))
> {code}
> In the example above the *let* expression is saving the contents of two 
> *cell* expressions (a, b). The *get* expression is retrieving the tuples and 
> using them later in the expression.
> So for example two facet expressions could be stored in the *let*, and then 
> displayed and correlated later in the expression.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10559) Add let and get Streaming Expressions

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995308#comment-15995308
 ] 

ASF subversion and git services commented on SOLR-10559:


Commit e3ca586b2ac973739c5d2b04a4e1206cd0c951c5 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3ca586 ]

SOLR-10559: Fix precommit


> Add let and get Streaming Expressions
> -
>
> Key: SOLR-10559
> URL: https://issues.apache.org/jira/browse/SOLR-10559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10559.patch, SOLR-10559.patch, SOLR-10559.patch, 
> SOLR-10559.patch
>
>
> The *let* and *get* Streaming Expressions allow the tuples in a stream to be 
> assigned to a variable so they can be used more than once during an expression.
> This builds on the *list* and *cell* expressions (SOLR-10551) 
> Here is the sample syntax:
> {code}
> let(cell(a, expr), 
> cell(b, expr), 
> list(cell(a, get(a)),
>  cell(b, get(b)),
>  cell(correlation, correlate(get(a), fielda, get(b), fieldb)))
> {code}
> In the example above the *let* expression is saving the contents of two 
> *cell* expressions (a, b). The *get* expression is retrieving the tuples and 
> using them later in the expression.
> So for example two facet expressions could be stored in the *let*, and then 
> displayed and correlated later in the expression.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10559) Add let and get Streaming Expressions

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995312#comment-15995312
 ] 

ASF subversion and git services commented on SOLR-10559:


Commit 0ccd3f4b8bedc27c6e6e430458c6331d5ea5e636 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ccd3f4 ]

SOLR-10559: Remove debuggin


> Add let and get Streaming Expressions
> -
>
> Key: SOLR-10559
> URL: https://issues.apache.org/jira/browse/SOLR-10559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10559.patch, SOLR-10559.patch, SOLR-10559.patch, 
> SOLR-10559.patch
>
>
> The *let* and *get* Streaming Expressions allow the tuples in a stream to be 
> assigned to a variable so they can be used more than once during an expression.
> This builds on the *list* and *cell* expressions (SOLR-10551) 
> Here is the sample syntax:
> {code}
> let(cell(a, expr), 
> cell(b, expr), 
> list(cell(a, get(a)),
>  cell(b, get(b)),
>  cell(correlation, correlate(get(a), fielda, get(b), fieldb)))
> {code}
> In the example above the *let* expression is saving the contents of two 
> *cell* expressions (a, b). The *get* expression is retrieving the tuples and 
> using them later in the expression.
> So for example two facet expressions could be stored in the *let*, and then 
> displayed and correlated later in the expression.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6542 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6542/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.EchoParamsTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001\snapshot_metadata:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001\snapshot_metadata

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001\snapshot_metadata:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001\snapshot_metadata
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001

at __randomizedtesting.SeedInfo.seed([2758FD905B7B1FD4]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11456 lines...]
   [junit4] Suite: org.apache.solr.EchoParamsTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.EchoParamsTest_2758FD905B7B1FD4-001\init-core-data-001
   [junit4]   2> 510222 WARN  
(SUITE-EchoParamsTest-seed#[2758FD905B7B1FD4]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 510222 INFO  
(SUITE-EchoParamsTest-seed#[2758FD905B7B1FD4]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields
   [junit4]   2> 510231 INFO  
(SUITE-EchoParamsTest-seed#[2758FD905B7B1FD4]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 510237 INFO  
(SUITE-EchoParamsTest-seed#[2758FD905B7B1FD4]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 510239 INFO  
(SUITE-EchoParamsTest-seed#[2758FD905B7B1FD4]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/C:/Users/jenkins/workspace/Lucene-Solr-master-Windows/solr/core/src/test-files/solr/collection1/lib,
 

[jira] [Resolved] (SOLR-10583) Add 'join' as a new type of domain change in JSON Facets

2017-05-03 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10583.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.6

> Add 'join' as a new type of domain change in JSON Facets
> 
>
> Key: SOLR-10583
> URL: https://issues.apache.org/jira/browse/SOLR-10583
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10583.patch, SOLR-10583.patch, SOLR-10583.patch
>
>
> Add support for a new (query) {{join}} option when specifying a {{domain}} 
> for a JSON Facet.
> Suggested syntax...
> {code}
> ...
> domain : { join : { from : field_foo,
> to : field_bar
> }
>}
> {code}
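
To show where the suggested fragment might sit in a complete request, here is a hedged SolrJ sketch that passes a json.facet parameter using the join domain. The collection, facet name and field names are hypothetical illustrations, not the patch's tests.

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JoinDomainFacetDemo {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solr =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // The suggested domain syntax, wrapped in an ordinary terms facet.
      q.add("json.facet",
          "{ categories : { type : terms, field : cat_s,"
          + "  domain : { join : { from : field_foo, to : field_bar } } } }");
      QueryResponse rsp = solr.query(q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}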



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10583) Add 'join' as a new type of domain change in JSON Facets

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995292#comment-15995292
 ] 

ASF subversion and git services commented on SOLR-10583:


Commit 083051899794e509b1fbd7cf5fc475094c3b452a in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0830518 ]

SOLR-10583: JSON Faceting now supports a query time 'join' domain change option

(cherry picked from commit 15e1c5d34f69fa2662b5299dce6fc808854f8ba3)


> Add 'join' as a new type of domain change in JSON Facets
> 
>
> Key: SOLR-10583
> URL: https://issues.apache.org/jira/browse/SOLR-10583
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-10583.patch, SOLR-10583.patch, SOLR-10583.patch
>
>
> Add support for a new (query) {{join}} option when specifying a {{domain}} 
> for a JSON Facet.
> Suggested syntax...
> {code}
> ...
> domain : { join : { from : field_foo,
> to : field_bar
> }
>}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10338) Configure SecureRandom non blocking for tests.

2017-05-03 Thread Mihaly Toth (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mihaly Toth updated SOLR-10338:
---
Attachment: SOLR-10338.patch

I tried again setting {{java.security.egd}} from a class itself. It is possible 
to set it, and it does have an effect on the algorithm selected given that the 
Sun security provider is loaded after that. Please see my opening comment on 
this Jira for code snippets.

So I believe it is possible. The other question is whether it is needed. The only 
motivation now is other projects that depend on {{SolrTestCaseJ4}}. 
By setting the property when it is unset, there is a chance that those projects 
will not be broken.

In Solr tests, however, I would not rely solely on this solution, because it is 
not bulletproof. If any test that does not subclass {{SolrTestCaseJ4}} AND uses 
{{SecureRandom}} directly or indirectly were to run before the 
{{SolrTestCaseJ4}}-subclassed tests, then all of those later tests would fail, 
because the algorithm is decided on the first usage of {{SecureRandom}}. I did a 
test run with the property set only in setup and got a couple of failures.

The attached patch uses the {{test.solr.allow.any.securerandom}} system property 
(defaulting to true) to control validation of the algorithm. When the algorithm 
is verified, I blacklisted {{NativePRNG}} and {{NativePRNGBlocking}}. According 
to 
https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SUNProvider
 those are the algorithms that use {{/dev/random}} in one way or another. 

Regarding the question of how stable it is to overwrite this property from code, 
I found no mention of any limitation on where the property can be set. Among 
other pages, I read the javadoc of 
https://github.com/frohoff/jdk8u-jdk/blob/master/src/solaris/classes/sun/security/provider/NativePRNG.java
 . So I do not think this behaviour would change in any future version of Java 
8. NativePRNG in Java 9 is only a skeleton implementation. And we have the test 
suite as a last resort to tell us if such an unexpected event occurred.
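
As a sketch of the kind of guard being discussed (not the attached patch), the snippet below sets {{java.security.egd}} only when unset and fails fast if a blocking NativePRNG variant ends up selected. As noted above, setting the property only has an effect if this runs before the Sun security provider is initialized; the class and message are illustrative.

{code}
import java.security.SecureRandom;

public class NonBlockingSecureRandomCheck {
  public static void main(String[] args) {
    // Only set the property if the caller has not, mirroring the idea above.
    if (System.getProperty("java.security.egd") == null) {
      System.setProperty("java.security.egd", "file:/dev/./urandom");
    }
    SecureRandom random = new SecureRandom();
    String algorithm = random.getAlgorithm();
    // NativePRNG / NativePRNGBlocking may read /dev/random and block; fail fast if selected.
    if (algorithm.equals("NativePRNG") || algorithm.equals("NativePRNGBlocking")) {
      throw new IllegalStateException("Blocking SecureRandom algorithm selected: " + algorithm);
    }
    System.out.println("SecureRandom algorithm: " + algorithm);
  }
}
{code}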

> Configure SecureRandom non blocking for tests.
> --
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch, SOLR-10338.patch, 
> SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non-blocking. In that case we 
> could get rid of the random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release planning for 7.0

2017-05-03 Thread David Smiley
+1

On Wed, May 3, 2017 at 11:56 AM Anshum Gupta  wrote:

> With 6.6 in the pipeline, I think sometime in June would be a good time to
> cut a release branch. What do all of you think?
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-7041) Remove defaultSearchField and solrQueryParser from schema

2017-05-03 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995279#comment-15995279
 ] 

David Smiley commented on SOLR-7041:


+1 patch LGTM

> Remove defaultSearchField and solrQueryParser from schema
> -
>
> Key: SOLR-7041
> URL: https://issues.apache.org/jira/browse/SOLR-7041
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.0
>
> Attachments: SOLR-7041-defaultOperator.patch, SOLR-7041-df.patch, 
> SOLR-7041.patch
>
>
> The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
> in Solr 3.6 (SOLR-2724). Time to nuke them from code and {{schema.xml}} in 5.0?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10583) Add 'join' as a new type of domain change in JSON Facets

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995275#comment-15995275
 ] 

ASF subversion and git services commented on SOLR-10583:


Commit 15e1c5d34f69fa2662b5299dce6fc808854f8ba3 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=15e1c5d ]

SOLR-10583: JSON Faceting now supports a query time 'join' domain change option


> Add 'join' as a new type of domain change in JSON Facets
> 
>
> Key: SOLR-10583
> URL: https://issues.apache.org/jira/browse/SOLR-10583
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-10583.patch, SOLR-10583.patch, SOLR-10583.patch
>
>
> Add support for a new (query) {{join}} option when specifying a {{domain}} 
> for a JSON Facet.
> Suggested syntax...
> {code}
> ...
> domain : { join : { from : field_foo,
> to : field_bar
> }
>}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 312 - Failure

2017-05-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/312/

No tests ran.

Build Log:
[...truncated 25997 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (24.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.6.0-src.tgz...
   [smoker] 29.6 MB in 0.04 sec (693.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.6.0.tgz...
   [smoker] 67.6 MB in 0.09 sec (737.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.6.0.zip...
   [smoker] 78.0 MB in 0.10 sec (751.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.6.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6248 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.6.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6248 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.6.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (40.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.6.0-src.tgz...
   [smoker] 38.4 MB in 0.05 sec (800.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.6.0.tgz...
   [smoker] 135.0 MB in 0.17 sec (813.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.6.0.zip...
   [smoker] 136.1 MB in 0.17 sec (793.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.6.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.6.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.6.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.6.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.6.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.6.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.6.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=13591). Happy searching!
   [smoker] 
   [smoker] 
   

[jira] [Created] (SOLR-10603) No system property or default value specified for org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider

2017-05-03 Thread baychae (JIRA)
baychae created SOLR-10603:
--

 Summary: No system property or default value specified for 
org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 Key: SOLR-10603
 URL: https://issues.apache.org/jira/browse/SOLR-10603
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.5
 Environment: osx, solr in /usr/local, 
Reporter: baychae


I have a startup command of:

bin/solr start -v -cloud -s ~/var/solr

In ~/var/solr I have zoo.cfg and solr.xml. In solr.xml I have inserted:

${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}

In bin/solr.in.sh I have:

SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
 \
  
-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 \
  -DzkDigestUsername=admin-user -DzkDigestPassword=secret \
  -DzkDigestReadonlyUsername=readonly-user 
-DzkDigestReadonlyPassword=CHANGEME-READONLY-PASSWORD"
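
Note: Solr's substitution syntax in solr.xml is ${propertyName:defaultValue}. In the setup above, zkCredentialsProvider is the property name (set via -D in solr.in.sh), and the VMParamsSingleSetCredentialsDigestZkCredentialsProvider class is that property's value, not a property name itself, so referencing the class directly as ${...} with no default triggers the error below. A hedged sketch of the conventional solr.xml form, assuming the layout from the ZooKeeper Access Control documentation, with the stock no-op providers as defaults:

{code}
<solrcloud>
  ...
  <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
  <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>
</solrcloud>
{code}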

The error is: 

2017-05-03 16:46:44.839 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: No system property or default value 
specified for 
org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 
value:${org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider}
at 
org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:65)
at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:303)
at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
at org.apache.solr.util.DOMUtil.substituteProperties(DOMUtil.java:311)
at org.apache.solr.core.Config.substituteProperties(Config.java:231)
at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:63)
at 
org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:131)
at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:113)
at 
org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:141)
at 
org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:267)
at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:235)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:169)
at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
at org.eclipse.jetty.util.Scanner.scan(Scanner.java:392)
at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:561)
at 
org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:236)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:422)
at 

[jira] [Comment Edited] (SOLR-10359) User Interactions Logging Module

2017-05-03 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942908#comment-15942908
 ] 

Alessandro Benedetti edited comment on SOLR-10359 at 5/3/17 4:27 PM:
-

Hi [~arafalov],
let me answer your considerations:

bq.Logging the search queries and results received (either as count or as 
specific ids). And - maybe - statistics on that.

The main intention here was to store "impressions", the results we are showing to 
the users for each query.
This is a required component when calculating Click Through Rate.
In my opinion, logging queries could even be a third aspect, not strictly 
related to the scope of this Jira issue, but definitely related to the component.
Indeed, it could end up in a sibling module, with different storage and data 
structures as well, but I agree it is an aspect definitely worth considering; I would 
love to have it too.

bq.The second one seems to be happening well out of Solr control (UI clicks, 
what user selected, etc).


Absolutely correct, and actually this is the main point of this new feature:
allow Solr to process, out of the box, user interactions with Solr results, to 
better evaluate how the search engine behaves.
So it is actually a way of giving Solr control over events happening 
outside, but strictly related to relevancy measurement.

bq.The second one seems to be happening well out of Solr control (UI clicks, 
what user selected, etc).

The main point is to expose an easy REST endpoint out of the box that will allow 
users to post events to Solr and then evaluate their search engine 
(and compare different A/B tests) through different evaluation metrics.

bq.I am not sure if that fits into Solr itself 

I do believe it could be a really nice addition to Solr. I see a lot of Solr users 
struggling to identify the quality of their search engine, and how well the 
system behaves overall or for specific queries.
Giving them the ability to evaluate and compare their system with a set of 
established evaluation metrics could be very useful.
This will allow anyone to do it without installing additional software or 
building complicated collections or abstractions that are not easy for 
everyone.
The main scope is to simplify the approach and make it super easy to do out 
of the box.

bq.Commercial platforms (such as Fusion) might be integrating it, but they 
control more of a stack

This is correct, but as far as I know, while the data collected may be 
similar, the target is completely different.
I think at the moment signal processing in Fusion is used as an additional 
factor for relevancy tuning (basically pushing well-clicked documents up the 
ranking).
The first scope of this new feature will be focused on evaluating relevancy, 
not tuning it.
I agree, of course, that as soon as the component(s) is/are in place, this will 
open the door to a big new world, and we could add additional ways of using the 
data collected (LTR training sets, relevancy tuning, etc.).
But at the moment this is out of scope.

Answering [~diegoceccarelli] :
1) UserInteraction is probably a better name than Users Events; I like it!

bq.how to create a unique search id? (should be responsability of solr? I think 
yes)

If I understand what you are referring to, I do believe Solr should be responsible for 
generating a query Id per user interaction, but it should also store the original query 
"plain", as it will be quite interesting to analyse the evaluation metrics on a 
per-query basis (in a human-readable way). To perform the aggregations, I do agree 
that a clever Id could make them more performant.
Were you referring to this as the "search Id"?

bq.if I want to use metric like the CTR (i.e., Click Through Rate, number of 
clicks / number of impressions) in the scoring formula how can I do that 
without joining the two collections? ( (maybe that could be a way to 'import' a 
particular metric into the main collection? )??

A) For a query-time approach, the quickest thing that comes to mind is 
defining a custom function query that accesses the data structure we 
defined to store the user interactions and calculates the aggregation on a 
per-document basis.
This could imply that we need to design the underlying data structure in a different 
way, as I don't know whether running those aggregations on a Lucene index will be 
fast enough (the function query will need to be calculated for every 
document).

B) For an indexing-time approach, we could:
1) design an additional data structure in the index, specifically designed to 
map a docId to the metric value. This could potentially be in the docValues 
format with some tweaks.
2) design a daemon (I need to take a deeper look at these, as I have not yet used 
them in Solr, but I have seen them mentioned in relation to streaming expressions) 
that, from time to time, runs in the background and generates this data structure 

[jira] [Updated] (SOLR-10359) User Interactions Logging Module

2017-05-03 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated SOLR-10359:

Summary: User Interactions Logging Module  (was: User Interactions Logger 
Component)

> User Interactions Logging Module
> 
>
> Key: SOLR-10359
> URL: https://issues.apache.org/jira/browse/SOLR-10359
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alessandro Benedetti
>  Labels: CTR, evaluation
>
> *Introduction*
> Being able to evaluate the quality of your search engine is becoming more and 
> more important day by day.
> This issue is a milestone toward integrating online evaluation metrics with 
> Solr.
> *Scope*
> The scope of this issue is to provide a set of components able to:
> 1) Collect search result impressions (results shown per query)
> 2) Collect user interactions (interactions on the search results per 
> query, e.g. clicks, bookmarking, etc.)
> 3) Calculate evaluation metrics on demand, such as Click Through Rate, DCG ...
> *Technical Design*
> A SearchComponent can be designed :
> *UsersEventsLoggerComponent*
> A property (such as storeDir) will define where the data collected will be 
> stored.
> Different data structures can be explored; to keep it simple, a first 
> implementation can be a Lucene index.
> *Data Model*
> The user event can be modelled in the following way :
>  - the user query the event is related to
>  - the ID of the search result involved in the interaction
>  - the position in the ranking of the search result involved 
> in the interaction
>  - time when the interaction happened
>  - 0 for impressions, a value between 1-5 to identify the 
> type of user event, the semantic will depend on the domain and use cases
>  - this can identify a variant, in A/B testing
> *Impressions Logging*
> When the SearchComponent is assigned to a request handler, every time it 
> processes a request and returns a result set for a query to the user, the 
> component will collect the impressions (results returned) and index them in 
> the auxiliary Lucene index.
> This will happen in parallel, as soon as the results are returned, to avoid 
> affecting the query time.
> Of course an impact on CPU load and memory is expected; it will be interesting 
> to minimise it.
> * User Events Logging *
> An UpdateHandler will be exposed to accept POST requests and collect user 
> events.
> Every time a request is sent, the user event will be indexed in the underlying 
> auxiliary Lucene index.
> * Stats Calculation *
> A RequestHandler will be exposed to calculate stats and 
> aggregations for the metrics:
> /evaluation?metric=ctr=query=testA,testB
> This request could calculate the CTR for our testA and testB to compare, 
> showing stats in total and per query (to highlight the queries with 
> lower/higher CTR).
> The calculations will happen separating the  for an easy 
> comparison.
> It will be important to keep it as simple as possible for a first version, and 
> then extend it as much as we like.
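
Purely as an illustration of the proposal above, a hedged sketch of how the proposed handlers might be exercised once they exist; the endpoints, field names, and values below are assumptions about the design being discussed, not existing Solr APIs (the parameters stripped from the /evaluation URL above are left out):

{code}
# post a user interaction event to the proposed update handler (hypothetical endpoint and fields)
curl -X POST "http://localhost:8983/solr/techproducts/userevents" \
  -H 'Content-Type: application/json' \
  -d '{ "query":"lucene", "docId":"MA147LL/A", "position":2, "time":"2017-05-03T16:00:00Z", "eventType":1, "variant":"testA" }'

# ask the proposed evaluation handler for Click Through Rate (hypothetical handler)
curl "http://localhost:8983/solr/techproducts/evaluation?metric=ctr"
{code}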



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-05-03 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995146#comment-15995146
 ] 

Cassandra Targett commented on SOLR-10295:
--

bq. but once things have settled (i.e. there is a released guide with an HTML 
format) I don't think we should link from the website to development versions 
of the ref guide

OK.

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.
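
As a rough illustration of the extpaths.txt mechanism described above: the file simply lists, one path per line, the production-only locations the CMS should preserve when syncing from staging. The entries below are assumptions based on the URL scheme floated above, not decisions that have been made:

{code}
solr/ref-guide/6_6
solr/ref-guide/7_0
{code}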



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-05-03 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995136#comment-15995136
 ] 

Steve Rowe commented on SOLR-10295:
---

bq. In the process of playing around with the CMS, I made a permanent directory 
content/solr/guide and included an index.html file there with a bit of 
placeholder content and a link to my test: 
https://lucene.apache.org/solr/guide/.

[~ctargett], I see your landing page has placeholders for development branches, 
but once things have settled (i.e. there is a released guide with an HTML 
format) I don't think we should link from the website to development versions 
of the ref guide.

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release planning for 7.0

2017-05-03 Thread Ishan Chattopadhyaya
> With 6.6 in the pipeline, I think sometime in June would be a good time
to cut a release
> branch. What do all of you think?

+1

On Wed, May 3, 2017 at 9:11 PM, Anshum Gupta  wrote:

> Hi,
>
> It's May already, and with 6.6 lined up, I think we should start planning
> on how we want to proceed with 7.0, in terms of both - the timeline, and
> what it would likely contain.
>
> I am not suggesting we start the release process right away, but just
> wanted to start a discussion around the above mentioned lines.
>
> With 6.6 in the pipeline, I think sometime in June would be a good time to
> cut a release branch. What do all of you think?
>
> P.S: This email is about 'discussion' and 'planning', so if someone wants
> to volunteer to be the release manager, please go ahead. I can't remember
> if someone did explicitly volunteer to wear this hat for 7.0. If no one
> volunteers, I will take it up.
>
> -Anshum
>


Release planning for 7.0

2017-05-03 Thread Anshum Gupta
Hi,

It's May already, and with 6.6 lined up, I think we should start planning
on how we want to proceed with 7.0, in terms of both - the timeline, and
what it would likely contain.

I am not suggesting we start the release process right away, but just
wanted to start a discussion around the above mentioned lines.

With 6.6 in the pipeline, I think sometime in June would be a good time to
cut a release branch. What do all of you think?

P.S: This email is about 'discussion' and 'planning', so if someone wants
to volunteer to be the release manager, please go ahead. I can't remember
if someone did explicitly volunteer to wear this hat for 7.0. If no one
volunteers, I will take it up.

-Anshum


[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995083#comment-15995083
 ] 

Erick Erickson commented on SOLR-9867:
--

Got it, running TestSolrCLIRunExample now.

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995077#comment-15995077
 ] 

Mikhail Khludnev commented on SOLR-9867:


[~erickerickson],
Here we need to test/beast {{TestSolrCLIRunExample}} for sure. We've done with 
{{SolrCloudExampleTest}} at SOLR-10588

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995053#comment-15995053
 ] 

Erick Erickson edited comment on SOLR-9867 at 5/3/17 3:26 PM:
--

got the patch, I'm starting to give it a beast run. I notice that you also 
changed TestSolrCLIRunExample but I'm beasting SolrCloudExampleTest. Let me 
know if I should beast TestSolrCLIRunExample instead.


was (Author: erickerickson):
got the patch, I'm starting to give it a beast run.

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995053#comment-15995053
 ] 

Erick Erickson commented on SOLR-9867:
--

got the patch, I'm starting to give it a beast run.

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+164) - Build # 19544 - Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19544/
Java: 32bit/jdk-9-ea+164 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
Time allowed to handle this request exceeded

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Time allowed to handle this 
request exceeded
at 
__randomizedtesting.SeedInfo.seed([9C3E8A4F695588D7:146AB595C7A9E52F]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:428)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1383)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1134)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:106)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:78)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:56)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994993#comment-15994993
 ] 

Mikhail Khludnev commented on SOLR-9867:


[~mdrob],
bq. perhaps, instead of latch we can assign cores strictly at the end of init. 
What do you think?
Indeed! Thanks for the hint!
bq. Depends on if we want early requests to fail or to wait, no?
I suppose it's fine since we already have this check.
bq. couldn't this take a while? How did you decide 10 seconds?
I hardly remember how exactly I got to it. Now I set it to 30 sec (almost no 
one will experience it, since most users wait until Solr loads the cores before 
stopping it); internally it makes 0.5 sec polls, so it doesn't really matter. But 
I'm happy to put any number there that makes the tests pass. 

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-05-03 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994992#comment-15994992
 ] 

Cassandra Targett commented on SOLR-10295:
--

bq. but that would mean all the pages that refer to it as a parent / link to it 
would have to refer to it as {{index}} not {{apache-solr-reference-guide}}

[~hossman]: I checked and there are no content pages that refer to 
{{apache-solr-reference-guide}}. There are only 2 references at all - one in 
the topnav.html include file and another in build.xml.

I had the mental model backwards - I thought each page declared its parent 
which made this change seem slightly more complicated. I forgot that we did it 
the other way, where each parent declares its children. That makes changing to 
use an {{index.html}} a really minor change, and that's what we should do.

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-05-03 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9867:
---
Attachment: SOLR-9867.patch
SOLR-9867-ignore-whitespace.patch

Also attaching [^SOLR-9867-ignore-whitespace.patch] to make review easier, since the 
original [^SOLR-9867.patch] has too many indentation changes. 
h3. Key stuff
* I've removed the SDF.init() latch and just assign the volatile {{SDF.cores}} at the 
end of {{init()}}, since SDF.doFilter() already has a precondition check for 
{{SDF.cores!=null}}. 
* this change introduces a new {{/admin/cores?action=STATUS&}} response:
{code}
{ name:foo, isLoaded:false, isLoading:true }
{code}
* {{SolrCLI}} waits up to 1 min for a core to come up, with a 1 sec poll interval.
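
For anyone who wants to watch the new status fields by hand while a node is loading, the core admin API can be polled directly; the host and core name below are placeholders matching the example response above, and the {{isLoaded}}/{{isLoading}} fields are the ones this patch adds:

{code}
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=foo&wt=json"
{code}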
 
h3. BTW
What about committing it tomorrow? 
h3. Also
All {{TestSolrCLIRunExample}} tests run in just 47 secs on my laptop. Should 
it still be marked as {{@Slow}}?

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Critical
> Fix For: 6.6, master (7.0)
>
> Attachments: SDF init and doFilter in parallel.png, 
> SOLR-9867-ignore-whitespace.patch, SOLR-9867.patch, SOLR-9867.patch, 
> SOLR-9867.patch, SOLR-9867.patch, SOLR-9867.patch, SOLR-9867-test.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists) and so 
> it deletes the cores instance dir which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore  is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10601) StreamExpressionParser should handle white space around = in named parameters

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994969#comment-15994969
 ] 

ASF subversion and git services commented on SOLR-10601:


Commit 17563ce81f26cb2844c60ba26b1627f6f7e8908d in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17563ce ]

SOLR-10601: StreamExpressionParser should handle white space around = in named 
parameters


> StreamExpressionParser should handle white space around = in named parameters
> -
>
> Key: SOLR-10601
> URL: https://issues.apache.org/jira/browse/SOLR-10601
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10601.patch
>
>
> Currently the StreamExpressionParser doesn't recognize a named property if 
> there is whitespace around the =.
> This is a quick fix.
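
For illustration, this is the kind of expression affected, i.e. a streaming expression whose named parameters carry whitespace around the {{=}}; the collection and field names are placeholders:

{code}
search(techproducts, q = "*:*", fl = "id, name", sort = "id asc")
{code}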



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7814) DateRangePrefixTree trouble finding years >= 292,000,000

2017-05-03 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7814:
-
Description: Using DateRangePrefixTree (in spatial-extras): When indexing 
years on or after the last million-year period supported by most computer 
systems, +292,000,000, a range query like {{\[1970 TO *\]}} will fail to find 
it.  This happens as a result of some off-by-one errors in DateRangePrefixTree. 
 (was: Using DateRangePrefixTree (in spatial-extras): When indexing years on or 
after the last millennia supported by most computer systems, +292,000,000, a 
range query like {{\[1970 TO *\]}} will fail to find it.  This happens as a 
result of some off-by-one errors in DateRangePrefixTree.)
Summary: DateRangePrefixTree trouble finding years >= 292,000,000  
(was: DateRangePrefixTree trouble finding years in last millennia +292,000,000)

> DateRangePrefixTree trouble finding years >= 292,000,000
> 
>
> Key: LUCENE-7814
> URL: https://issues.apache.org/jira/browse/LUCENE-7814
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE_7814_DateRangePrefixTree_last_millennia_bug.patch
>
>
> Using DateRangePrefixTree (in spatial-extras): When indexing years on or 
> after the last million-year period supported by most computer systems, 
> +292,000,000, a range query like {{\[1970 TO *\]}} will fail to find it.  
> This happens as a result of some off-by-one errors in DateRangePrefixTree.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10549) /schema/fieldtypes doesn't include "large" if configured as a fieldType default

2017-05-03 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-10549.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.6

> /schema/fieldtypes doesn't include "large" if configured as a fieldType 
> default
> ---
>
> Key: SOLR-10549
> URL: https://issues.apache.org/jira/browse/SOLR-10549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
>Reporter: Hoss Man
>Assignee: David Smiley
> Fix For: 6.6, master (7.0)
>
> Attachments: 
> SOLR_10549___schema_fieldtypes_showDefaults_misses_large.patch
>
>
> spin-off of SOLR-10439, since that fix for {{/schema/fields}} is likely to be 
> included in 6.5.1, so the comparable fix for {{/schema/fieldtypes}} needs 
> to be tracked in its own jira
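
For reference, the fixed attribute shows up when asking the Schema API for field type details with defaults included; the collection name here is just an example:

{code}
curl "http://localhost:8983/solr/techproducts/schema/fieldtypes?showDefaults=true"
{code}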



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10549) /schema/fieldtypes doesn't include "large" if configured as a fieldType default

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994935#comment-15994935
 ] 

ASF subversion and git services commented on SOLR-10549:


Commit e5b0c1669860e3ed89bd7935b4636d89ca5c05e0 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e5b0c16 ]

* SOLR-10549: The new 'large' attribute had been forgotten in 
/schema/fieldtypes?showDefaults=true (David Smiley)

(cherry picked from commit 037d864)


> /schema/fieldtypes doesn't include "large" if configured as a fieldType 
> default
> ---
>
> Key: SOLR-10549
> URL: https://issues.apache.org/jira/browse/SOLR-10549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
>Reporter: Hoss Man
>Assignee: David Smiley
> Attachments: 
> SOLR_10549___schema_fieldtypes_showDefaults_misses_large.patch
>
>
> spin-off of SOLR-10439, since that fix for {{/schema/fields}} is likely to be 
> included in 6.5.1, so the comparable fix for {{/schema/fieldtypes}} needs 
> to be tracked in its own jira



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10549) /schema/fieldtypes doesn't include "large" if configured as a fieldType default

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994933#comment-15994933
 ] 

ASF subversion and git services commented on SOLR-10549:


Commit b901d16b9b15e227f4a48fe5891326169d29bd5e in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b901d16 ]

* SOLR-10549: (typo fix in CHANGES.txt)


> /schema/fieldtypes doesn't include "large" if configured as a fieldType 
> default
> ---
>
> Key: SOLR-10549
> URL: https://issues.apache.org/jira/browse/SOLR-10549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
>Reporter: Hoss Man
>Assignee: David Smiley
> Attachments: 
> SOLR_10549___schema_fieldtypes_showDefaults_misses_large.patch
>
>
> spin-off of SOLR-10439, since that fix for {{/schema/fields}} is likely to be 
> included in 6.5.1, so the comparable fix for {{/schema/fieldtypes}} needs 
> to be tracked in its own jira



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1281 - Still Unstable!

2017-05-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1281/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitShardWithRule

Error Message:
Error from server at http://127.0.0.1:48960/x_k: Could not find collection : 
shardSplitWithRule

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:48960/x_k: Could not find collection : 
shardSplitWithRule
at 
__randomizedtesting.SeedInfo.seed([137F0FFA03E06941:9271DE489E12326A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:451)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:392)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1383)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1134)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitShardWithRule(ShardSplitTest.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 339 - Still unstable

2017-05-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/339/

1 tests failed.
FAILED:  
org.apache.solr.store.blockcache.BlockDirectoryTest.testRandomAccessWrites

Error Message:
Direct buffer memory

Stack Trace:
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.solr.store.blockcache.BlockCache.<init>(BlockCache.java:75)
at 
org.apache.solr.store.blockcache.BlockDirectoryTest.setUp(BlockDirectoryTest.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:941)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12025 lines...]
   [junit4] Suite: org.apache.solr.store.blockcache.BlockDirectoryTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/solr/build/solr-core/test/J0/temp/solr.store.blockcache.BlockDirectoryTest_AF97F17D8B045715-001/init-core-data-001
   [junit4]   2> 1108536 WARN  

[jira] [Updated] (SOLR-10602) Triggers should be able to restore state from old instances when taking over

2017-05-03 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-10602:
-
Attachment: SOLR-10602.patch

* Added a new restoreState(Trigger) method to the Trigger interface (a rough sketch is below)
* Added tests for the node-added and node-lost triggers
* ScheduledTriggers calls this new method before scheduling the replacement 
trigger

This still needs an integration test.
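
For orientation only, a minimal sketch of what the described change could look 
like; the names, signatures and data structures below are assumptions drawn from 
the bullet points above, not the contents of the attached patch:

{code}
// Hedged sketch only: signatures are assumptions, not SOLR-10602.patch.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Trigger extends AutoCloseable {
  String getName();

  /** Copy intermediate state (e.g. nodes seen but not yet fired on) from the instance being replaced. */
  void restoreState(Trigger old);

  @Override
  void close();
}

final class ScheduledTriggersSketch {
  private final Map<String, Trigger> active = new ConcurrentHashMap<>();

  /** Replace a trigger of the same name without losing the state it was tracking. */
  void addOrReplace(Trigger replacement) {
    Trigger old = active.put(replacement.getName(), replacement);
    if (old != null) {
      replacement.restoreState(old); // carry over waitFor bookkeeping before (re)scheduling
      old.close();
    }
    // ... hand `replacement` to the scheduler here ...
  }
}
{code}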

> Triggers should be able to restore state from old instances when taking over
> 
>
> Key: SOLR-10602
> URL: https://issues.apache.org/jira/browse/SOLR-10602
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10602.patch
>
>
> Currently, if a user modifies a trigger, the old trigger is closed and 
> unscheduled and replaced with a new trigger instance with updated properties. 
> However, this loses the intermediate state that the trigger may have been 
> tracking. For example, say there is a trigger for the NodeAdded event with 
> waitFor=5s and a new node is added to the cluster. While the trigger is 
> waiting for 5s before firing, the user modifies the trigger to change 
> waitFor to 2s. Doing this today will erase the state of the old trigger, and 
> the new trigger will never fire for the newly added node.
> We need to be able to restore state from the old trigger instance before 
> replacing it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-05-03 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994862#comment-15994862
 ] 

Cassandra Targett commented on SOLR-10295:
--

I don't have a ton of experience with RewriteRules (OK, zero), but I think 
something like this would work in .htaccess:

{code}
RewriteRule solr/guide/(.*)/index.html$ /solr/guide/$1/ [R=301,NE]
{code}

I believe that would work for any path under {{solr/guide/}} so it won't need 
to be updated for each release or branch we choose to publish.

Anyone know for sure?
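
Not an authoritative answer on mod_rewrite's per-directory behaviour in 
.htaccess, but as a quick sanity check of the pattern itself, the capture group 
picks up whichever release directory appears in the path (the paths below are 
hypothetical examples, and the dots are left unescaped, mirroring the rule as 
proposed):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RewritePatternCheck {
  public static void main(String[] args) {
    // Same regex as the proposed RewriteRule pattern (mod_rewrite patterns are
    // not anchored at the start by default, so this is only an approximation).
    Pattern p = Pattern.compile("solr/guide/(.*)/index.html$");

    for (String path : new String[] {
        "solr/guide/6_6/index.html",
        "solr/guide/7_0/index.html",
        "solr/guide/6_6/some-page.html"   // should NOT be rewritten
    }) {
      Matcher m = p.matcher(path);
      if (m.find()) {
        System.out.println(path + "  ->  /solr/guide/" + m.group(1) + "/");
      } else {
        System.out.println(path + "  ->  (no rewrite)");
      }
    }
  }
}
{code}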

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1485) Payload support

2017-05-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994858#comment-15994858
 ] 

Erik Hatcher edited comment on SOLR-1485 at 5/3/17 1:33 PM:


At the very least, if you want to do some lowercasing, or other token 
filtering, one could do that either before or after 
DelimitedPayloadTokenFilter, and the current implementation will at least match 
exact'ish phrases and payload score or check them - so it's actually not a bad 
first pass.  (At first I implemented it with simple space splitting, as 
*_dps/*_dpf/*_dpi do, but then added analysis to add more configurability).


was (Author: ehatcher):
At the very least, if you want to do some lowercasing, or other token 
filtering, one could do that either before or after 
DelimitedPayloadTokenFilter, and the current implementation will at least match 
exact'ish phrases and payload score or check them - so it's actually not a bad 
first pass.  (At first I implemented it with simple space splitting, as DPTF 
does, but then added analysis).

> Payload support
> ---
>
> Key: SOLR-1485
> URL: https://issues.apache.org/jira/browse/SOLR-1485
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: PayloadTermQueryPlugin.java, payload_value_source.png, 
> SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, 
> SOLR-1485.patch
>
>
> Solr has no support for Lucene's PayloadScoreQuery, yet it has support for 
> indexing payloads (via DelimitedPayloadTokenFilter or 
> NumericPayloadTokenFilter). 
> This issue adds value source and query parser support for leveraging payloads 
> created by those token filters.
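
For readers who haven't used the indexing side mentioned above, here is a 
minimal, hedged sketch of a delimited-payload analysis chain built from the 
stock Lucene factories (the field name, sample text and delimiter are 
arbitrary; this is not part of the attached patches). It also shows lowercasing 
applied before the payload filter, as discussed in the comment above:

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;

public class DelimitedPayloadSketch {
  public static void main(String[] args) throws Exception {
    // Whitespace tokenization, lowercasing, then "term|payload" splitting
    // with float-encoded payloads.
    Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer("whitespace")
        .addTokenFilter("lowercase") // token filtering before the payload filter
        .addTokenFilter("delimitedPayload", "delimiter", "|", "encoder", "float")
        .build();

    try (TokenStream ts = analyzer.tokenStream("payload_field", "Foo|1.5 Bar|0.25")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      PayloadAttribute payload = ts.addAttribute(PayloadAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        // e.g. "foo" with a 4-byte float payload encoding 1.5
        System.out.println(term + " -> " + payload.getPayload());
      }
      ts.end();
    }
  }
}
{code}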



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1485) Payload support

2017-05-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994858#comment-15994858
 ] 

Erik Hatcher edited comment on SOLR-1485 at 5/3/17 1:34 PM:


At the very least, if you want to do some lowercasing, or other token 
filtering, one could do that either before or after 
DelimitedPayloadTokenFilter, and the current implementation will at least match 
exact'ish phrases and payload score or check them - so it's actually not a bad 
first pass.  (At first I implemented it with simple space splitting, as 
\*_dps/\*_dpf/\*_dpi do, but then added analysis to add more configurability).


was (Author: ehatcher):
At the very least, if you want to do some lowercasing, or other token 
filtering, one could do that either before or after 
DelimitedPayloadTokenFilter, and the current implementation will at least match 
exact'ish phrases and payload score or check them - so it's actually not a bad 
first pass.  (At first I implemented it with simple space splitting, as 
*_dps/*_dpf/*_dpi do, but then added analysis to add more configurability).

> Payload support
> ---
>
> Key: SOLR-1485
> URL: https://issues.apache.org/jira/browse/SOLR-1485
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: PayloadTermQueryPlugin.java, payload_value_source.png, 
> SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, 
> SOLR-1485.patch
>
>
> Solr has no support for Lucene's PayloadScoreQuery, yet it has support for 
> indexing payloads (via DelimitedPayloadTokenFilter or 
> NumericPayloadTokenFilter). 
> This issue adds value source and query parser support for leveraging payloads 
> created by those token filters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1485) Payload support

2017-05-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994858#comment-15994858
 ] 

Erik Hatcher commented on SOLR-1485:


At the very least, if you want to do some lowercasing, or other token 
filtering, one could do that either before or after 
DelimitedPayloadTokenFilter, and the current implementation will at least match 
exact'ish phrases and payload score or check them - so it's actually not a bad 
first pass.  (At first I implemented it with simple space splitting, as DPTF 
does, but then added analysis).

> Payload support
> ---
>
> Key: SOLR-1485
> URL: https://issues.apache.org/jira/browse/SOLR-1485
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: PayloadTermQueryPlugin.java, payload_value_source.png, 
> SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, SOLR-1485.patch, 
> SOLR-1485.patch
>
>
> Solr has no support for Lucene's PayloadScoreQuery, yet it has support for 
> indexing payloads (via DelimitedPayloadTokenFilter or 
> NumericPayloadTokenFilter). 
> This issue adds value source and query parser support for leveraging payloads 
> created by those token filters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10549) /schema/fieldtypes doesn't include "large" if configured as a fieldType default

2017-05-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994855#comment-15994855
 ] 

ASF subversion and git services commented on SOLR-10549:


Commit 037d864bf993ddba7af2bbee8f1a87b5e347240a in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=037d864 ]

* SOLR-10549: The new 'large' attribute had been forgotten in 
/schema/fields?showDefaults=true (David Smiley)


> /schema/fieldtypes doesn't include "large" if configured as a fieldType 
> default
> ---
>
> Key: SOLR-10549
> URL: https://issues.apache.org/jira/browse/SOLR-10549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
>Reporter: Hoss Man
>Assignee: David Smiley
> Attachments: 
> SOLR_10549___schema_fieldtypes_showDefaults_misses_large.patch
>
>
> Spin-off of SOLR-10439: since that fix for {{/schema/fields}} is likely to be 
> included in 6.5.1, the comparable fix for {{/schema/fieldtypes}} needs 
> to be tracked in its own JIRA.
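
As a quick way to check either endpoint once the fix is in, here is a hedged 
SolrJ sketch (the base URL and core name are placeholders, and the core's 
schema is assumed to set {{large}} as a fieldType default):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class ShowDefaultsCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder core URL; point this at a core whose fieldType sets large="true".
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("showDefaults", "true");

      // Both endpoints should report the 'large' attribute when it is a fieldType default.
      for (String path : new String[] {"/schema/fields", "/schema/fieldtypes"}) {
        NamedList<Object> response =
            client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, path, params));
        System.out.println(path + " -> " + response);
      }
    }
  }
}
{code}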



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10602) Triggers should be able to restore state from old instances when taking over

2017-05-03 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-10602:


 Summary: Triggers should be able to restore state from old 
instances when taking over
 Key: SOLR-10602
 URL: https://issues.apache.org/jira/browse/SOLR-10602
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: master (7.0)


Currently, if a user modifies a trigger, the old trigger is closed and 
unscheduled and replaced with a new trigger instance with updated properties. 
However, this loses the intermediate state that the trigger may have been 
tracking. For example, say there is a trigger for the NodeAdded event with 
waitFor=5s and a new node is added to the cluster. While the trigger is waiting 
for 5s before firing, the user modifies the trigger to change waitFor to 2s. 
Doing this today will erase the state of the old trigger, and the new trigger 
will never fire for the newly added node.

We need to be able to restore state from the old trigger instance before 
replacing it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2283) Expose QueryUtils methods

2017-05-03 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-2283.
---
Resolution: Duplicate

> Expose QueryUtils methods
> -
>
> Key: SOLR-2283
> URL: https://issues.apache.org/jira/browse/SOLR-2283
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.0-ALPHA
>Reporter: Jayendra Patil
>Priority: Minor
> Attachments: SOLR-2283.patch
>
>
> We have a custom implementation of ExtendedDismaxQParserPlugin, bundled into 
> a jar in the multicore lib.
> The custom ExtendedDismaxQParserPlugin implementation still uses the 
> org.apache.solr.search.QueryUtils makeQueryable method, same as the old 
> implementation.
> However, the method call throws a java.lang.IllegalAccessError, as it is 
> being called from the inner ExtendedSolrQueryParser class and 
> makeQueryable has no access modifier (i.e. package-private/default access).
> Can we change the access modifier to public, since all the methods are 
> static, so they are accessible?
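
One common workaround, until/unless the modifier is opened up, is a small 
public delegate compiled into the same package. This is a hedged sketch only; 
the makeQueryable signature is assumed from the description and may differ 
across Solr versions:

{code}
// Hedged workaround sketch: a public delegate in the same package makes the
// package-private static helper reachable from plugin code in other packages.
package org.apache.solr.search;

import org.apache.lucene.search.Query;

public final class QueryUtilsBridge {

  private QueryUtilsBridge() {} // static delegate only

  /** Public pass-through for the package-private QueryUtils.makeQueryable(Query). */
  public static Query makeQueryable(Query q) {
    return QueryUtils.makeQueryable(q);
  }
}
{code}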



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10307) Provide SSL/TLS keystore password a more secure way

2017-05-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994742#comment-15994742
 ] 

Shalin Shekhar Mangar commented on SOLR-10307:
--

Ah ok, I see the disconnect. I didn't register on the first read that 
SSLConfigurations.init() puts the relevant environment variables into system 
properties for other classes to use. Thanks Mano.

+1 to commit.
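
For context, the mechanism referred to above is essentially an 
environment-variable-to-system-property hand-off at startup. A hedged sketch 
follows; the variable and property names are illustrative placeholders, not 
necessarily the exact ones SSLConfigurations uses:

{code}
// Illustrative only: names below are placeholders, not necessarily those in the patch.
public class SslPasswordBootstrapSketch {

  public static void main(String[] args) {
    copyEnvToSysProp("SOLR_SSL_KEY_STORE_PASSWORD", "javax.net.ssl.keyStorePassword");
    copyEnvToSysProp("SOLR_SSL_TRUST_STORE_PASSWORD", "javax.net.ssl.trustStorePassword");
  }

  /** Promote an environment variable to a system property unless one was already supplied. */
  static void copyEnvToSysProp(String envName, String sysPropName) {
    String value = System.getenv(envName);
    if (value != null && System.getProperty(sysPropName) == null) {
      System.setProperty(sysPropName, value);
    }
  }
}
{code}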

> Provide SSL/TLS keystore password a more secure way
> ---
>
> Key: SOLR-10307
> URL: https://issues.apache.org/jira/browse/SOLR-10307
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10307.patch, SOLR-10307.patch, SOLR-10307.patch
>
>
> Currently the only way to pass server- and client-side SSL keystore and 
> truststore passwords is to set specific environment variables that are then 
> passed on as system properties via command-line parameters.
> The first option is to pass the passwords through environment variables 
> directly, which gives a better level of protection. The second option would 
> be to use the Hadoop credential provider interface to access a credential 
> store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-7807) Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures

2017-05-03 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7807:
-
Comment: was deleted

(was: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/339/console

bq.[junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:57:32, stalled 
for 3069s at: TestGeo3DPoint.testRandomBig
bq.   [junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:58:32, stalled 
for 3129s at: TestGeo3DPoint.testRandomBig

seems like it hangs. oops, sorry, it seems like the wrong jira. )

> Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures
> -
>
> Key: LUCENE-7807
> URL: https://issues.apache.org/jira/browse/LUCENE-7807
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>
> My Jenkins found a failing master seed for {{testRandomBig()}} in all the 
> {{Test\*RangeFieldQueries}} suites:
> {noformat}
> Checking out Revision 2f6101b2ab9e93173a9764450b5f71935495802e 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIntRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-VE -Dtests.timezone=US/Hawaii -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  116s J2  | TestIntRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-2147483648 TO 2147483647)
>[junit4]>  box=Box(-1240758905 TO 1326270659)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=Lucene50(blocksize=128)}, docValues:{id=DocValuesFormat(name=Memory), 
> intRangeField=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=805, 
> maxMBSortInHeap=6.031593684373835, sim=RandomSimilarity(queryNorm=true): {}, 
> locale=es-VE, timezone=US/Hawaii
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=238346048,total=532152320
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFloatRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=no-NO -Dtests.timezone=America/Lima -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  117s J9  | TestFloatRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-Infinity TO Infinity)
>[junit4]>  box=Box(-7.845437E37 TO 1.6013746E38)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=PostingsFormat(name=Asserting)}, 
> docValues:{floatRangeField=DocValuesFormat(name=Memory), 
> id=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=69, 
> maxMBSortInHeap=7.5037451812250175, sim=RandomSimilarity(queryNorm=true): {}, 
> locale=no-NO, timezone=America/Lima
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDoubleRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> 

[jira] [Comment Edited] (LUCENE-7807) Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994716#comment-15994716
 ] 

Mikhail Khludnev edited comment on LUCENE-7807 at 5/3/17 12:01 PM:
---

https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/339/console

bq.[junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:57:32, stalled 
for 3069s at: TestGeo3DPoint.testRandomBig
bq.   [junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:58:32, stalled 
for 3129s at: TestGeo3DPoint.testRandomBig

seems like it hangs. oops, sorry, it seems like the wrong jira. 


was (Author: mkhludnev):
https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/339/console

bq.[junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:57:32, stalled 
for 3069s at: TestGeo3DPoint.testRandomBig
bq.   [junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:58:32, stalled 
for 3129s at: TestGeo3DPoint.testRandomBig

seems like it hangs. 

> Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures
> -
>
> Key: LUCENE-7807
> URL: https://issues.apache.org/jira/browse/LUCENE-7807
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>
> My Jenkins found a failing master seed for {{testRandomBig()}} in all the 
> {{Test\*RangeFieldQueries}} suites:
> {noformat}
> Checking out Revision 2f6101b2ab9e93173a9764450b5f71935495802e 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIntRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-VE -Dtests.timezone=US/Hawaii -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  116s J2  | TestIntRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-2147483648 TO 2147483647)
>[junit4]>  box=Box(-1240758905 TO 1326270659)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=Lucene50(blocksize=128)}, docValues:{id=DocValuesFormat(name=Memory), 
> intRangeField=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=805, 
> maxMBSortInHeap=6.031593684373835, sim=RandomSimilarity(queryNorm=true): {}, 
> locale=es-VE, timezone=US/Hawaii
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=238346048,total=532152320
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFloatRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=no-NO -Dtests.timezone=America/Lima -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  117s J9  | TestFloatRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-Infinity TO Infinity)
>[junit4]>  box=Box(-7.845437E37 TO 1.6013746E38)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=PostingsFormat(name=Asserting)}, 
> docValues:{floatRangeField=DocValuesFormat(name=Memory), 
> 

[jira] [Commented] (LUCENE-7807) Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures

2017-05-03 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994716#comment-15994716
 ] 

Mikhail Khludnev commented on LUCENE-7807:
--

https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/339/console

bq.[junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:57:32, stalled 
for 3069s at: TestGeo3DPoint.testRandomBig
bq.   [junit4] HEARTBEAT J1 PID(14941@localhost): 2017-05-03T11:58:32, stalled 
for 3129s at: TestGeo3DPoint.testRandomBig

seems like it hangs. 

> Test*RangeFieldQueries.testRandomBig() and .testRandomTiny() failures
> -
>
> Key: LUCENE-7807
> URL: https://issues.apache.org/jira/browse/LUCENE-7807
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>
> My Jenkins found a failing master seed for {{testRandomBig()}} in all the 
> {{Test\*RangeFieldQueries}} suites:
> {noformat}
> Checking out Revision 2f6101b2ab9e93173a9764450b5f71935495802e 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIntRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-VE -Dtests.timezone=US/Hawaii -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  116s J2  | TestIntRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-2147483648 TO 2147483647)
>[junit4]>  box=Box(-1240758905 TO 1326270659)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=Lucene50(blocksize=128)}, docValues:{id=DocValuesFormat(name=Memory), 
> intRangeField=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=805, 
> maxMBSortInHeap=6.031593684373835, sim=RandomSimilarity(queryNorm=true): {}, 
> locale=es-VE, timezone=US/Hawaii
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=238346048,total=532152320
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFloatRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=no-NO -Dtests.timezone=America/Lima -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE  117s J9  | TestFloatRangeFieldQueries.testRandomBig <<<
>[junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
> possibly more):
>[junit4]> FAIL (iter 28): id=973300 should not match but did
>[junit4]>  queryRange=Box(-Infinity TO Infinity)
>[junit4]>  box=Box(-7.845437E37 TO 1.6013746E38)
>[junit4]>  queryType=CONTAINS
>[junit4]>  deleted?=false
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([90CE64A8F1BFDF78:1799192760E6A3F8]:0)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
>[junit4]>  at 
> org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:73)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=PostingsFormat(name=Asserting)}, 
> docValues:{floatRangeField=DocValuesFormat(name=Memory), 
> id=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=69, 
> maxMBSortInHeap=7.5037451812250175, sim=RandomSimilarity(queryNorm=true): {}, 
> locale=no-NO, timezone=America/Lima
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDoubleRangeFieldQueries -Dtests.method=testRandomBig 
> -Dtests.seed=90CE64A8F1BFDF78 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> 
