[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_76) - Build # 11943 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11943/
Java: 32bit/jdk1.7.0_76 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can happen, but shouldn't easily
        at __randomizedtesting.SeedInfo.seed([F23D715F0CB90200:7A694E85A2456FF8]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertFalse(Assert.java:68)
        at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:222)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12109 - Still Failing!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12109/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at http://127.0.0.1:48804/implicit_collection_without_routerfield_shard1_replica1: no servers hosting shard:

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:48804/implicit_collection_without_routerfield_shard1_replica1: no servers hosting shard:
        at __randomizedtesting.SeedInfo.seed([F8A576488306250A:70F149922DFA48F2]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
        at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:225)
        at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40) - Build # 12112 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12112/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerStatusTest.test

Error Message:
Error from server at http://127.0.0.1:42291: reload the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:42291: reload the collection time out:180s
        at __randomizedtesting.SeedInfo.seed([4C6FBFB91BB0CA32:C43B8063B54CA7CA]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
        at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.invokeCollectionApi(AbstractFullDistribZkTestBase.java:1864)
        at org.apache.solr.cloud.OverseerStatusTest.test(OverseerStatusTest.java:76)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-27 Thread Per Steffensen (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383624#comment-14383624 ]

Per Steffensen commented on SOLR-7236:
--

I am trying to understand why Solr wants to move away from being a web-app 
running in a servlet-container. A servlet-container can deal with all of the 
things you mention in a standardized way.

bq. User management (flat file, AD/LDAP etc)

That should really not be a Solr concern. If you want your users/credentials in 
LDAP, manage them through an LDAP management interface; if you want them in flat 
files, let the admin change the files; etc.

bq. Authentication (Admin UI, Admin and data/query operations. Tons of auth 
protocols: basic, digest, oauth, pki..)

Implement your own realm that authenticates against whatever credentials store 
you want, with whatever protocol you want, and plug it into your 
servlet-container. Or use a third-party realm that authenticates against your 
preferred credentials store and supports your auth protocol. It is easy to find 
realms for Jetty (or Tomcat or ...) that authenticate against flat files, LDAP, 
or whatever. We wrote our own realm, authenticating against a credentials store 
in ZK - easy.

bq. Authorization (who can do what with what API, collection, doc)

Let your realm list the roles an authenticated user is allowed to play. Set up 
web.xml to map roles to the URL paths they are allowed to use. If the limited 
flexibility of web.xml authorization (URL mapping with stupid limitations) is 
not good enough for you, implement it as a Filter yourself, or use one of the 
Filters already out there. E.g. I provided a reg-exp authorization filter with 
SOLR-4470 - it is very powerful.
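For illustration, the declarative role-to-URL mapping described above might look like this in web.xml (the URL patterns, realm name, and role names here are made up for the example, not taken from any Solr release):

```xml
<!-- Only users in the "admin" role may reach the admin API paths. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin API</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>

<!-- The container authenticates via the pluggable realm configured outside web.xml. -->
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>MyRealm</realm-name>
</login-config>

<security-role>
  <role-name>admin</role-name>
</security-role>
```

The realm decides who the user is and which roles they hold; web.xml only maps roles to URL patterns, which is exactly the "limited flexibility" the comment refers to.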

bq. Pluggability (no user's needs are equal)

It is all pluggable - realms, web.xml, filters, etc.
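A reg-exp authorization filter of the kind mentioned can be sketched as follows (this is an illustrative sketch, not the SOLR-4470 code; the rule table and paths are invented for the example):

```python
import re

# Hypothetical rule table: the first pattern matching the request path wins.
# Each rule maps a path regex to the set of roles allowed to use it.
RULES = [
    (re.compile(r"^/solr/admin/.*"), {"admin"}),
    (re.compile(r"^/solr/\w+/update.*"), {"admin", "writer"}),
    (re.compile(r"^/solr/\w+/select.*"), {"admin", "writer", "reader"}),
]

def authorized(path, user_roles):
    """Return True if any of the user's roles is allowed for this path."""
    for pattern, allowed in RULES:
        if pattern.match(path):
            return bool(allowed & user_roles)
    return False  # deny by default: paths no rule covers are rejected
```

In a servlet container this check would live in a Filter's doFilter, rejecting the request with 403 before it reaches Solr's handlers.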

 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here 
 should discuss real user needs and high-level strategy, before deciding on 
 implementation details. All work will be done in sub tasks and linked issues.
 Solr has not traditionally concerned itself with security, and it has been a 
 general view among the committers that it may be better to stay out of it, to 
 avoid blood on our hands in this minefield. Still, Solr has lately seen SSL 
 support, securing of ZK, and signing of jars, and discussions have begun 
 about securing operations in Solr.
 Some of the topics to address are:
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no user's needs are equal)
 * And we could go on and on, but this is what we've seen the most demand for



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_76) - Build # 4484 - Still Failing!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4484/
Java: 32bit/jdk1.7.0_76 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=5474, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=5474, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:55293: collection already exists: awholynewstresscollection_collection0_0
        at __randomizedtesting.SeedInfo.seed([BC43FD3E7C484D40]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:370)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1067)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
        at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1695)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1716)
        at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:885)




Build Log:
[...truncated 10033 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.CollectionsAPIDistributedZkTest BC43FD3E7C484D40-001\init-core-data-001
   [junit4]   2> 1393961 T4989 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 1393961 T4989 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /
   [junit4]   2> 1393966 T4989 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1393974 T4990 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 1394080 T4989 oasc.ZkTestServer.run start zk server on port:55237
   [junit4]   2> 1394080 T4989 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 1394082 T4989 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 1394087 T4997 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@190f4f2 name:ZooKeeperConnection Watcher:127.0.0.1:55237 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1394087 T4989 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 1394087 T4989 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 1394088 T4989 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 1394098 T4989 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 1394099 T4989 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 1394104 T5000 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@138bb45 name:ZooKeeperConnection Watcher:127.0.0.1:55237/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1394104 T4989 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 1394105 T4989 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 1394107 T4989 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 1394126 T4989 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 1394129 T4989 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 1394131 T4989 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 1394142 T4989 oasc.AbstractZkTestCase.putConfig put

[jira] [Assigned] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera reassigned SOLR-7318:


Assignee: Shai Erera

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D\:), relativizing the path cannot happen.
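The failure mode is easy to demonstrate with Windows path semantics (an illustrative sketch, not the test code; Python's ntpath module applies Windows rules on any platform):

```python
import ntpath  # Windows path semantics, usable on any OS

# Same drive root: relativizing works.
same_drive = ntpath.relpath(r"C:\work\solr\home", start=r"C:\work")

# Different drive roots: there is no relative path from C: to D:,
# so relpath refuses, mirroring the bug described in this issue.
try:
    ntpath.relpath(r"D:\tmp\solr\home", start=r"C:\work")
    cross_drive_ok = True
except ValueError:  # "path is on mount 'D:', start on mount 'C:'"
    cross_drive_ok = False
```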



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7318:
-
Description: From here: http://markmail.org/message/662jtoxaztbsl3sp. The 
problem is that if the working directory and temp directory are on different 
roots (e.g. C: and D\:), relativizing the path cannot happen.  (was: From here: 
http://markmail.org/message/662jtoxaztbsl3sp. The problem is that if the 
working directory and temp directory are on different roots (e.g. C: and D:), 
relativizing the path cannot happen.)

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D\:), relativizing the path cannot happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7318:
-
Attachment: SOLR-7318.patch

Patch fixes the bug, tests pass (on Windows).

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D:), relativizing the path cannot happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383689#comment-14383689 ]

ASF subversion and git services commented on SOLR-7318:
---

Commit 1669542 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1669542 ]

SOLR-7318: AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work 
on Windows

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D\:), relativizing the path cannot happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7226) Make /query/*, jmx/*, requestDispatcher/*, listener initParams properties in solrconfig.xml editable

2015-03-27 Thread Shai Erera (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383685#comment-14383685 ]

Shai Erera commented on SOLR-7226:
--

Before this commit, these lines were commented out:

{noformat}
//List l = (List) ConfigOverlay.getObjectByPath(map, false, Arrays.asList("config", "initParams"));
//assertNotNull("no object /config/initParams : " + TestBlobHandler.getAsString(map), l);
//assertEquals(1, l.size());
//assertEquals("val", ((Map) l.get(0)).get("key"));
{noformat}

The commit only added these:

{noformat}
List l = (List) ConfigOverlay.getObjectByPath(map, false, Arrays.asList("config", "initParams"));
assertEquals(1, l.size());
assertEquals("val", ((Map) l.get(0)).get("key"));
{noformat}
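For context, getObjectByPath walks the parsed config map one path segment at a time and yields null when any segment is missing, which is why the assertNotNull fires before the size assertion can. A rough Python sketch of that lookup behavior (illustrative only, not the actual ConfigOverlay code):

```python
def get_object_by_path(obj, path):
    """Walk nested dicts by the given key path; return None if any
    segment is absent (the analogue of getObjectByPath returning null)."""
    for key in path:
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

# Hypothetical miniature of the /config response: initParams present vs absent.
cfg = {"config": {"initParams": [{"key": "val"}]}}
```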

I added the {{assertNotNull}}, and it prints this:

{noformat}
java.lang.AssertionError: no object /config/initParams : {
  responseHeader:{
status:0,
QTime:0},
  config:{
znodeVersion:0,
luceneMatchVersion:org.apache.lucene.util.Version:6.0.0,
updateHandler:{
  class:solr.DirectUpdateHandler2,
  autoCommmitMaxDocs:-1,
  indexWriterCloseWaitsForMerges:true,
  openSearcher:true,
  commitIntervalLowerBound:-1,
  commitWithinSoftCommit:true,
  autoCommit:{
maxDocs:-1,
maxTime:-1,
commitIntervalLowerBound:-1},
  autoSoftCommit:{
maxDocs:-1,
maxTime:-1}},
query:{
  useFilterForSortedQuery:false,
  queryResultWindowSize:1,
  queryResultMaxDocsCached:2147483647,
  enableLazyFieldLoading:false,
  maxBooleanClauses:1024,
  :{
size:1,
showItems:-1,
initialSize:10,
name:fieldValueCache}},
jmx:{
  agentId:null,
  serviceUrl:null,
  rootName:null},
requestHandler:{
  standard:{
name:standard,
class:solr.StandardRequestHandler},
  /admin/file:{
name:/admin/file,
class:solr.admin.ShowFileRequestHandler,
invariants:{hidden:bogus.txt}},
  /update:{
name:/update,
class:org.apache.solr.handler.UpdateRequestHandler,
defaults:{}},
  /update/json:{
name:/update/json,
class:org.apache.solr.handler.UpdateRequestHandler,
defaults:{update.contentType:application/json}},
  /update/csv:{
name:/update/csv,
class:org.apache.solr.handler.UpdateRequestHandler,
defaults:{update.contentType:application/csv}},
  /update/json/docs:{
name:/update/json/docs,
class:org.apache.solr.handler.UpdateRequestHandler,
defaults:{
  update.contentType:application/json,
  json.command:false}},
  /config:{
name:/config,
class:org.apache.solr.handler.SolrConfigHandler,
defaults:{}},
  /schema:{
name:/schema,
class:org.apache.solr.handler.SchemaHandler,
defaults:{}},
  /replication:{
name:/replication,
class:org.apache.solr.handler.ReplicationHandler,
defaults:{}},
  /get:{
name:/get,
class:org.apache.solr.handler.RealTimeGetHandler,
defaults:{
  omitHeader:true,
  wt:json,
  indent:true}},
  /admin/luke:{
name:/admin/luke,
class:org.apache.solr.handler.admin.LukeRequestHandler,
defaults:{}},
  /admin/system:{
name:/admin/system,
class:org.apache.solr.handler.admin.SystemInfoHandler,
defaults:{}},
  /admin/mbeans:{
name:/admin/mbeans,
class:org.apache.solr.handler.admin.SolrInfoMBeanHandler,
defaults:{}},
  /admin/plugins:{
name:/admin/plugins,
class:org.apache.solr.handler.admin.PluginInfoHandler,
defaults:{}},
  /admin/threads:{
name:/admin/threads,
class:org.apache.solr.handler.admin.ThreadDumpHandler,
defaults:{}},
  /admin/properties:{
name:/admin/properties,
class:org.apache.solr.handler.admin.PropertiesRequestHandler,
defaults:{}},
  /admin/logging:{
name:/admin/logging,
class:org.apache.solr.handler.admin.LoggingHandler,
defaults:{}},
  /admin/ping:{
name:/admin/ping,
class:org.apache.solr.handler.PingRequestHandler,
defaults:{},
invariants:{
  echoParams:all,
  q:solrpingquery}},
  /admin/segments:{
name:/admin/segments,
class:org.apache.solr.handler.admin.SegmentsInfoRequestHandler,
defaults:{}}},
directoryFactory:{
  name:DirectoryFactory,
  class:org.apache.solr.core.MockDirectoryFactory,
  solr.hdfs.blockcache.enabled:true,
  solr.hdfs.blockcache.blocksperbank:1024,
  solr.hdfs.home:,
  solr.hdfs.confdir:,
  solr.hdfs.blockcache.global:false},
updateRequestProcessorChain:[
  {
name:nodistrib,
:[
  

SolrJ 5.x compatibility with Solr 4.x

2015-03-27 Thread Karl Wright
Hi,

Now that Solr 5.0.0 has been released, the ManifoldCF project needs to
figure out the appropriate time to upgrade to a newer SolrJ.  Can anyone
point me at the backwards-compatibility policy for SolrJ 5.0.0?  Will it
work properly with Solr 4.x?

Thanks in advance,
Karl


[jira] [Resolved] (LUCENE-6367) Can PrefixQuery subclass AutomatonQuery?

2015-03-27 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6367.

Resolution: Fixed

 Can PrefixQuery subclass AutomatonQuery?
 

 Key: LUCENE-6367
 URL: https://issues.apache.org/jira/browse/LUCENE-6367
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6367.patch, LUCENE-6367.patch


 Spinoff/blocker for LUCENE-5879.
 It seems like PrefixQuery should simply be an AutomatonQuery rather than 
 specializing its own TermsEnum ... with maybe some performance improvements 
 to ByteRunAutomaton.run to short-circuit once it's in a sink state, 
 AutomatonTermsEnum could be just as fast as PrefixTermsEnum.
 If we can do this it will make LUCENE-5879 simpler.
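The core idea here — a prefix query is just a small automaton, and a run automaton can stop early once it enters a sink state — can be sketched in plain Java. This is only an illustration of the technique; it is not Lucene's actual AutomatonQuery/ByteRunAutomaton API:

```java
// Sketch of prefix matching as a tiny automaton. States 0..prefix.length()
// consume the prefix one char at a time; after the prefix is consumed, every
// char loops on an accepting state. SINK is absorbing, so a run over a term
// can short-circuit the moment it reaches the sink.
class PrefixAutomatonSketch {
    static final int SINK = -1;

    static int step(String prefix, int state, char c) {
        if (state == SINK) return SINK;                      // sink is absorbing
        if (state < prefix.length()) {
            return c == prefix.charAt(state) ? state + 1 : SINK;
        }
        return state;                                        // past the prefix: accept anything
    }

    static boolean accepts(String prefix, String term) {
        int state = 0;
        for (int i = 0; i < term.length(); i++) {
            state = step(prefix, state, term.charAt(i));
            if (state == SINK) return false;                 // short-circuit in the sink state
        }
        return state >= prefix.length();                     // whole prefix was consumed
    }

    public static void main(String[] args) {
        assert accepts("foo", "foobar");
        assert accepts("foo", "foo");
        assert !accepts("foo", "fob");
        assert !accepts("foo", "fo");
    }
}
```

The sink-state short-circuit is what makes an automaton-based enum competitive with a hand-written PrefixTermsEnum: as soon as a term diverges from the prefix, no further characters are examined.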



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383556#comment-14383556
 ] 

Per Steffensen commented on SOLR-6816:
--

Just to clarify: we did our own bloom-filter implementation, rather than building 
on the existing feature, because we did not find the existing one satisfactory. I 
do not remember exactly why, but it may have been because the current solution 
builds a bloom filter on each segment, and bloom filters are not really mergeable 
unless they have the same size and use the same hashing algorithm (and the same 
number of hashes). To be efficient you want your bloom filter to be bigger (and 
potentially use a different number of hashes) the bigger your segment is. We 
decided on a one-bloom-filter-per-index/core/replica (or whatever you like to 
call it) solution, so that bloom filters never need to be merged. Besides that, 
you can tell Solr the maximum expected number of documents in the index and the 
FPP you want at that document count, and it will calculate the size of the 
bloom filter to use. Because such a one-bloom-filter-per-index approach does not 
fit well with Lucene's segmenting strategy, we implemented it as a Solr feature; 
that is, Solr maintains the bloom filter, and the feature is not available to 
people who use Lucene directly.
It is not a simple task to do this right, so if the lower-hanging fruit is 
satisfactory, you should pick it first.
To recap: we have a recently-updated-or-looked-up cache to efficiently find the 
version of a document that does exist (and was touched recently), and a bloom 
filter to efficiently determine that a document does not exist.
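The sizing step described above (given an expected document count and a target FPP, derive the filter size and hash count) follows the standard bloom-filter formulas. A sketch of that calculation, not the actual code from the patch:

```java
// Standard bloom-filter sizing: for n expected entries and target
// false-positive probability p, the optimal bit count is
//   m = -n * ln(p) / (ln 2)^2
// and the optimal number of hash functions is k = (m / n) * ln 2.
class BloomSizing {
    static long optimalBits(long expectedDocs, double fpp) {
        return (long) Math.ceil(-expectedDocs * Math.log(fpp)
                                / (Math.log(2) * Math.log(2)));
    }

    static int optimalHashes(long expectedDocs, long bits) {
        return Math.max(1, (int) Math.round((double) bits / expectedDocs * Math.log(2)));
    }

    public static void main(String[] args) {
        // 1M docs at 1% FPP needs roughly 9.6M bits (~1.2 MB) and 7 hashes.
        long bits = optimalBits(1_000_000, 0.01);
        assert bits > 9_000_000 && bits < 10_000_000;
        assert optimalHashes(1_000_000, bits) == 7;
    }
}
```

This also shows why per-segment filters are awkward to merge: m and k both depend on the expected entry count, so two filters sized for different segments are incompatible bit-for-bit.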

 Review SolrCloud Indexing Performance.
 --

 Key: SOLR-6816
 URL: https://issues.apache.org/jira/browse/SOLR-6816
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Priority: Critical
 Attachments: SolrBench.pdf


 We have never really focused on indexing performance, just correctness and 
 low hanging fruit. We need to vet the performance and try to address any 
 holes.
 Note: A common report is that adding any replication is very slow.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2842 - Still Failing

2015-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2842/

4 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:51164//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:51164//collection1
at 
__randomizedtesting.SeedInfo.seed([A2D7B7BFA916A33E:2A83886507EACEC6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:566)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:556)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:604)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:565)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4601 - Still Failing!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4601/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosData

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([FA0B7A3B20A30B5A:86A25860964BBCC5]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:794)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosData(SegmentsInfoRequestHandlerTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=5=//lst[@name='segments']/lst[1]/int[@name='size']
xml response was: ?xml version=1.0 encoding=UTF-8?
response
lst name=responseHeaderint name=status0/intint 
name=QTime1/int/lstlst name=segmentslst name=_0str 
name=name_0/strint 

Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b54) - Build # 11925 - Still Failing!

2015-03-27 Thread Michael McCandless
This is an odd failure: it looks like the LineFileDocs was closed
while threads were still running, but I can see in the test that we
clearly .join() to the threads...

Mike McCandless

http://blog.mikemccandless.com


On Wed, Mar 25, 2015 at 9:12 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11925/
 Java: 64bit/jdk1.9.0-ea-b54 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

 Error Message:
 Captured an uncaught exception in thread: Thread[id=1036, name=Thread-865, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=1036, name=Thread-865, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]
 Caused by: java.lang.RuntimeException: java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([3274B57CC2BBC4CD]:0)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:302)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
 at org.apache.lucene.util.LineFileDocs.nextDoc(LineFileDocs.java:224)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:151)




 Build Log:
 [...truncated 836 lines...]
[junit4] Suite: org.apache.lucene.index.TestNRTThreads
[junit4]   1 Thread-865: hit exc
[junit4]   1 Thread-864: hit exc
[junit4]   1 Thread-863: hit exc
[junit4]   2 java.lang.NullPointerException
[junit4]   2at 
 org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
[junit4]   2at 
 org.apache.lucene.util.LineFileDocs.nextDoc(LineFileDocs.java:224)
[junit4]   2at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:151)
[junit4]   2 java.lang.NullPointerException
[junit4]   2at 
 org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
[junit4]   2at 
 org.apache.lucene.util.LineFileDocs.nextDoc(LineFileDocs.java:224)
[junit4]   2at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:151)
[junit4]   2 java.lang.NullPointerException
[junit4]   2at 
 org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
[junit4]   2at 
 org.apache.lucene.util.LineFileDocs.nextDoc(LineFileDocs.java:224)
[junit4]   2at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:151)
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestNRTThreads 
 -Dtests.method=testNRTThreads -Dtests.seed=3274B57CC2BBC4CD 
 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=fr_FR 
 -Dtests.timezone=America/Costa_Rica -Dtests.asserts=true 
 -Dtests.file.encoding=UTF-8
[junit4] ERROR   14.9s J1 | TestNRTThreads.testNRTThreads 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runTest(ThreadedIndexingAndSearchingTestCase.java:551)
[junit4]at 
 org.apache.lucene.index.TestNRTThreads.testNRTThreads(TestNRTThreads.java:149)
[junit4]at java.lang.Thread.run(Thread.java:745)Throwable #2: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=1036, name=Thread-865, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]
[junit4] Caused by: java.lang.RuntimeException: 
 java.lang.NullPointerException
[junit4]at 
 __randomizedtesting.SeedInfo.seed([3274B57CC2BBC4CD]:0)
[junit4]at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:302)
[junit4] Caused by: java.lang.NullPointerException
[junit4]at 
 org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
[junit4]at 
 org.apache.lucene.util.LineFileDocs.nextDoc(LineFileDocs.java:224)
[junit4]at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$1.run(ThreadedIndexingAndSearchingTestCase.java:151)Throwable
  #3: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=1034, name=Thread-863, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]
[junit4] Caused by: java.lang.RuntimeException: 
 java.lang.NullPointerException
[junit4]at 
 __randomizedtesting.SeedInfo.seed([3274B57CC2BBC4CD]:0)
[junit4]at 
 

[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383521#comment-14383521
 ] 

Michael McCandless commented on LUCENE-6308:


+1 to the latest patch: it's a great step forward for spans.  We can
iterate more in the follow-on issues... I think we should commit what
we have here now?

The patch just needs a minor fix to NearSpans' TwoPhaseIterator now
that the approximation is passed to the ctor.

It's wonderful that Spans now extends DISI, just adding the position
iteration API, and I like adopting the same -1 / Integer.MAX_VALUE
sentinels we use for docs iteration.
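The sentinel convention mentioned here (-1 before iteration starts, Integer.MAX_VALUE once exhausted, mirroring DocIdSetIterator's doc-id sentinels) can be illustrated with a minimal position iterator. This is a hypothetical class for illustration, not the actual Spans API:

```java
// Minimal position iterator using the same sentinels as doc-id iteration:
// -1 before the first call to nextStartPosition(), Integer.MAX_VALUE once
// the positions are exhausted. Because MAX_VALUE is a reserved value, callers
// can compare positions with plain < without a separate "done" flag.
class PositionIteratorSketch {
    static final int NO_MORE_POSITIONS = Integer.MAX_VALUE;

    private final int[] positions;
    private int upto = -1;                                   // -1: iteration not started

    PositionIteratorSketch(int[] positions) { this.positions = positions; }

    int startPosition() {
        if (upto < 0) return -1;
        return upto < positions.length ? positions[upto] : NO_MORE_POSITIONS;
    }

    int nextStartPosition() {
        upto++;
        return startPosition();
    }

    public static void main(String[] args) {
        PositionIteratorSketch it = new PositionIteratorSketch(new int[] {3, 7});
        assert it.startPosition() == -1;
        assert it.nextStartPosition() == 3;
        assert it.nextStartPosition() == 7;
        assert it.nextStartPosition() == NO_MORE_POSITIONS;
    }
}
```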


 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2056 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2056/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x3_commits come up within 
3 ms! ClusterState: {   collection1:{ maxShardsPerNode:1, 
replicationFactor:1, shards:{shard1:{ 
range:8000-7fff, state:active, replicas:{ 
  core_node1:{ state:active, 
node_name:127.0.0.1:53633_lv%2Fp, core:collection1, 
base_url:http://127.0.0.1:53633/lv/p;, leader:true}, 
  core_node2:{ state:active, 
node_name:127.0.0.1:53637_lv%2Fp, core:collection1, 
base_url:http://127.0.0.1:53637/lv/p},   core_node3:{ 
state:active, node_name:127.0.0.1:53643_lv%2Fp, 
core:collection1, 
base_url:http://127.0.0.1:53643/lv/p},   core_node4:{ 
state:active, node_name:127.0.0.1:53649_lv%2Fp, 
core:collection1, base_url:http://127.0.0.1:53649/lv/p, 
autoAddReplicas:false, router:{name:compositeId}, 
autoCreated:true},   control_collection:{ maxShardsPerNode:1, 
replicationFactor:1, shards:{shard1:{ 
range:8000-7fff, state:active, 
replicas:{core_node1:{ state:active, 
node_name:127.0.0.1:53627_lv%2Fp, core:collection1, 
base_url:http://127.0.0.1:53627/lv/p;, leader:true,  
   autoAddReplicas:false, router:{name:compositeId}, 
autoCreated:true},   c8n_1x3_commits:{ maxShardsPerNode:1, 
replicationFactor:3, shards:{shard1:{ 
range:8000-7fff, state:active, replicas:{ 
  core_node1:{ state:recovering, 
node_name:127.0.0.1:53649_lv%2Fp, 
core:c8n_1x3_commits_shard1_replica2, 
base_url:http://127.0.0.1:53649/lv/p},   core_node2:{ 
state:active, node_name:127.0.0.1:53643_lv%2Fp, 
core:c8n_1x3_commits_shard1_replica1, 
base_url:http://127.0.0.1:53643/lv/p;, leader:true}, 
  core_node3:{ state:active, 
node_name:127.0.0.1:53637_lv%2Fp, 
core:c8n_1x3_commits_shard1_replica3, 
base_url:http://127.0.0.1:53637/lv/p, autoAddReplicas:false,
 router:{name:compositeId}}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
c8n_1x3_commits come up within 3 ms! ClusterState: {
  collection1:{
maxShardsPerNode:1,
replicationFactor:1,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
state:active,
node_name:127.0.0.1:53633_lv%2Fp,
core:collection1,
base_url:http://127.0.0.1:53633/lv/p;,
leader:true},
  core_node2:{
state:active,
node_name:127.0.0.1:53637_lv%2Fp,
core:collection1,
base_url:http://127.0.0.1:53637/lv/p},
  core_node3:{
state:active,
node_name:127.0.0.1:53643_lv%2Fp,
core:collection1,
base_url:http://127.0.0.1:53643/lv/p},
  core_node4:{
state:active,
node_name:127.0.0.1:53649_lv%2Fp,
core:collection1,
base_url:http://127.0.0.1:53649/lv/p,
autoAddReplicas:false,
router:{name:compositeId},
autoCreated:true},
  control_collection:{
maxShardsPerNode:1,
replicationFactor:1,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{core_node1:{
state:active,
node_name:127.0.0.1:53627_lv%2Fp,
core:collection1,
base_url:http://127.0.0.1:53627/lv/p;,
leader:true,
autoAddReplicas:false,
router:{name:compositeId},
autoCreated:true},
  c8n_1x3_commits:{
maxShardsPerNode:1,
replicationFactor:3,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
state:recovering,
node_name:127.0.0.1:53649_lv%2Fp,
core:c8n_1x3_commits_shard1_replica2,
base_url:http://127.0.0.1:53649/lv/p},
  core_node2:{
state:active,
node_name:127.0.0.1:53643_lv%2Fp,
core:c8n_1x3_commits_shard1_replica1,
base_url:http://127.0.0.1:53643/lv/p;,
leader:true},
  core_node3:{
state:active,
node_name:127.0.0.1:53637_lv%2Fp,
core:c8n_1x3_commits_shard1_replica3,
base_url:http://127.0.0.1:53637/lv/p,

[jira] [Commented] (LUCENE-6367) Can PrefixQuery subclass AutomatonQuery?

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383512#comment-14383512
 ] 

ASF subversion and git services commented on LUCENE-6367:
-

Commit 1669522 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1669522 ]

LUCENE-6367: PrefixQuery now subclasses AutomatonQuery

 Can PrefixQuery subclass AutomatonQuery?
 

 Key: LUCENE-6367
 URL: https://issues.apache.org/jira/browse/LUCENE-6367
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6367.patch, LUCENE-6367.patch


 Spinoff/blocker for LUCENE-5879.
 It seems like PrefixQuery should simply be an AutomatonQuery rather than 
 specializing its own TermsEnum ... with maybe some performance improvements 
 to ByteRunAutomaton.run to short-circuit once it's in a sink state, 
 AutomatonTermsEnum could be just as fast as PrefixTermsEnum.
 If we can do this it will make LUCENE-5879 simpler.






[jira] [Commented] (LUCENE-6367) Can PrefixQuery subclass AutomatonQuery?

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383528#comment-14383528
 ] 

ASF subversion and git services commented on LUCENE-6367:
-

Commit 1669526 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669526 ]

LUCENE-6367: PrefixQuery now subclasses AutomatonQuery

 Can PrefixQuery subclass AutomatonQuery?
 

 Key: LUCENE-6367
 URL: https://issues.apache.org/jira/browse/LUCENE-6367
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6367.patch, LUCENE-6367.patch


 Spinoff/blocker for LUCENE-5879.
 It seems like PrefixQuery should simply be an AutomatonQuery rather than 
 specializing its own TermsEnum ... with maybe some performance improvements 
 to ByteRunAutomaton.run to short-circuit once it's in a sink state, 
 AutomatonTermsEnum could be just as fast as PrefixTermsEnum.
 If we can do this it will make LUCENE-5879 simpler.






[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383567#comment-14383567
 ] 

Per Steffensen commented on SOLR-6816:
--

bq. if a task fails, then Hadoop usually re-tries that task a couple of times, 
meaning all docs in the block that failed will be sent again

We do not send all documents again if just a few in a batch (bulk) fail. Let's 
say you send a batch of 1000 docs for indexing and only 2 fail, due to e.g. 
version control; we only do another round on those 2 documents - SOLR-3382
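The retry strategy described (re-send only the documents that failed, rather than the whole batch the way a task-level Hadoop retry would) can be sketched as follows. This is a hypothetical helper, not the SOLR-3382 code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Each round re-indexes only the documents that failed in the previous
// round, instead of re-sending the entire batch.
class PartialRetrySketch {
    static <T> List<T> indexWithRetry(List<T> batch, Predicate<T> indexOnce, int maxRounds) {
        List<T> pending = new ArrayList<>(batch);
        for (int round = 0; round < maxRounds && !pending.isEmpty(); round++) {
            List<T> failed = new ArrayList<>();
            for (T doc : pending) {
                if (!indexOnce.test(doc)) failed.add(doc);   // keep only the failures
            }
            pending = failed;                                // next round: failures only
        }
        return pending;                                      // docs that never succeeded
    }

    public static void main(String[] args) {
        // Docs 2 and 5 fail on their first attempt (e.g. a version conflict)
        // and succeed on the retry round; everything else succeeds immediately.
        Set<Integer> firstAttempt = new HashSet<>();
        Predicate<Integer> indexOnce =
            doc -> (doc != 2 && doc != 5) || !firstAttempt.add(doc);
        List<Integer> batch = Arrays.asList(1, 2, 3, 4, 5);
        assert indexWithRetry(batch, indexOnce, 2).isEmpty();
    }
}
```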

 Review SolrCloud Indexing Performance.
 --

 Key: SOLR-6816
 URL: https://issues.apache.org/jira/browse/SOLR-6816
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Priority: Critical
 Attachments: SolrBench.pdf


 We have never really focused on indexing performance, just correctness and 
 low hanging fruit. We need to vet the performance and try to address any 
 holes.
 Note: A common report is that adding any replication is very slow.






[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383772#comment-14383772
 ] 

Michael McCandless commented on LUCENE-6308:


bq. What if you have a term that's indexed at that position? 

You cannot: it's a reserved value.  Just like no doc id can be 
Integer.MAX_VALUE.

And if you have 2.1B tokens in one document you are going to have other 
problems...

We should fix indexing to detect if this happens and reject the document, but 
that should be a separate issue and I don't think it should block this one...

 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration






[jira] [Commented] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383743#comment-14383743
 ] 

ASF subversion and git services commented on SOLR-7318:
---

Commit 1669551 from [~shaie] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669551 ]

SOLR-7318: AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work 
on Windows

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D:), relativizing the path cannot happen.
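The failure mode here is that java.nio's Path.relativize throws IllegalArgumentException when the two paths have different roots (e.g. C:\ vs D:\ on Windows). A sketch of the general guard, not the actual SOLR-7318 patch:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Objects;

// Path.relativize(other) cannot produce a relative path across different
// filesystem roots, so compare the roots first and fall back to the
// absolute path when they differ.
class RelativePathSketch {
    static String relativeOrAbsolute(Path base, Path target) {
        if (!Objects.equals(base.getRoot(), target.getRoot())) {
            return target.toAbsolutePath().toString();       // cross-root: no relative form
        }
        return base.relativize(target).toString();
    }

    public static void main(String[] args) {
        // Same root: a normal relative path comes back.
        Path base = Paths.get("/work/dir");
        Path target = Paths.get("/work/solr/home");
        assert relativeOrAbsolute(base, target)
                .equals(".." + java.io.File.separator + "solr" + java.io.File.separator + "home");
    }
}
```

On a POSIX filesystem all absolute paths share the root "/", which is why this only bites on Windows test machines.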






[jira] [Commented] (SOLR-7134) Replication can still cause index corruption.

2015-03-27 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383719#comment-14383719
 ] 

Ramkumar Aiyengar commented on SOLR-7134:
-

Mark, do you plan to merge this to branch_5x soon? I wanted to merge my commit 
for SOLR-7291, which I still can, but will end up causing a few conflicts for 
you when you merge -- hence wanted to check..

 Replication can still cause index corruption.
 -

 Key: SOLR-7134
 URL: https://issues.apache.org/jira/browse/SOLR-7134
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: Trunk, 5.1

 Attachments: SOLR-7134.patch, SOLR-7134.patch, SOLR-7134.patch, 
 SOLR-7134.patch, SOLR-7134.patch


 While we have plugged most of these holes, there appears to be another that 
 is fairly rare.
 I've seen it play out a couple ways in tests, but it looks like part of the 
 problem is that even if we decide we need a file and download it, we don't 
 care if we then cannot move it into place if it already exists.
 I'm working with a fix that does two things:
 * Fail a replication attempt if we cannot move a file into place because it 
 already exists.
 * If a replication attempt during recovery fails, on the next attempt force a 
 full replication to a new directory.
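The first bullet — refuse to move a downloaded file into place when the destination already exists, and fail the replication attempt instead — might look roughly like this. A plain-java.nio sketch, not the actual SOLR-7134 patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Fail loudly when the destination already exists instead of silently keeping
// a possibly-stale file. The caller treats the exception as a failed
// replication attempt and, on the next attempt, forces a full replication
// into a fresh directory.
class ReplicationInstallSketch {
    static void installDownloadedFile(Path downloaded, Path dest) throws IOException {
        if (Files.exists(dest)) {
            throw new IOException("replication failed: " + dest + " already exists");
        }
        Files.move(downloaded, dest);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("repl");
        Path src = Files.write(dir.resolve("seg.tmp"), new byte[] {1});
        Path dst = dir.resolve("seg");
        installDownloadedFile(src, dst);                     // first install: OK
        assert Files.exists(dst) && !Files.exists(src);

        Path src2 = Files.write(dir.resolve("seg2.tmp"), new byte[] {2});
        boolean failed = false;
        try {
            installDownloadedFile(src2, dst);                // dest exists: must fail
        } catch (IOException expected) {
            failed = true;
        }
        assert failed;
    }
}
```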






[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383736#comment-14383736
 ] 

Alan Woodward commented on LUCENE-6308:
---

I don't think Integer.MAX_VALUE works though?  What if you have a term that's 
indexed at that position?  SpanTermQuery won't find it.  I think we have to use 
a boolean here (or a different value like -2, but that makes comparisons 
between positions more complicated).

 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds 
 two-phase doc id iteration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7318) AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows

2015-03-27 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-7318.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

Committed to trunk and 5x.

 AbstractFullDistribZkTestBase.getRelativeSolrHomePath doesn't work on Windows
 -

 Key: SOLR-7318
 URL: https://issues.apache.org/jira/browse/SOLR-7318
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: Trunk, 5.1

 Attachments: SOLR-7318.patch


 From here: http://markmail.org/message/662jtoxaztbsl3sp. The problem is that 
 if the working directory and temp directory are on different roots (e.g. C: 
 and D:), relativizing the path cannot happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383749#comment-14383749
 ] 

ASF subversion and git services commented on SOLR-7082:
---

Commit 1669554 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1669554 ]

SOLR-7082: Editing Javadoc

 Streaming Aggregation for SolrCloud
 ---

 Key: SOLR-7082
 URL: https://issues.apache.org/jira/browse/SOLR-7082
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Joel Bernstein
 Fix For: Trunk, 5.1

 Attachments: SOLR-7082.patch, SOLR-7082.patch, SOLR-7082.patch, 
 SOLR-7082.patch, SOLR-7082.patch


 This issue provides a general purpose streaming aggregation framework for 
 SolrCloud. An overview of how it works can be found at this link:
 http://heliosearch.org/streaming-aggregation-for-solrcloud/
 This functionality allows SolrCloud users to perform operations that were 
 typically done using map/reduce or a parallel computing platform.
 Here is a brief explanation of how the framework works:
 There is a new Solrj *io* package found in: *org.apache.solr.client.solrj.io*
 Key classes:
 *Tuple*: Abstracts a document in a search result as a Map of key/value pairs.
 *TupleStream*: is the base class for all of the streams. Abstracts search 
 results as a stream of Tuples.
 *SolrStream*: connects to a single Solr instance. You call the read() method 
 to iterate over the Tuples.
 *CloudSolrStream*: connects to a SolrCloud collection and merges the results 
 based on the sort param. The merge takes place in CloudSolrStream itself.
 *Decorator Streams*: wrap other streams to perform operations on the streams. 
 Some examples are the UniqueStream, MergeStream and ReducerStream.
 *Going parallel with the ParallelStream and  Worker Collections*
 The io package also contains the *ParallelStream*, which wraps a TupleStream 
 and sends it to N worker nodes. The workers are chosen from a SolrCloud 
 collection. These Worker Collections don't have to hold any data, they can 
 just be used to execute TupleStreams.
 *The StreamHandler*
 The Worker nodes have a new RequestHandler called the *StreamHandler*. The 
 ParallelStream serializes a TupleStream, before it is opened, and sends it to 
 the StreamHandler on the Worker Nodes.
 The StreamHandler on each Worker node deserializes the TupleStream, opens the 
 stream, iterates the tuples and streams them back to the ParallelStream. The 
 ParallelStream performs the final merge of Metrics and can be wrapped by 
 other Streams to handle the final merged TupleStream.
 *Sorting and Partitioning search results (Shuffling)*
 Each Worker node is shuffled 1/N of the document results. There is a 
 partitionKeys parameter that can be included with each TupleStream to 
 ensure that Tuples with the same partitionKeys are shuffled to the same 
 Worker. The actual partitioning is done with a filter query using the 
 HashQParserPlugin. The DocSets from the HashQParserPlugin can be cached in 
 the filter cache which provides extremely high performance hash partitioning. 
 Many of the stream transformations rely on the sort order of the TupleStreams 
 (GroupByStream, MergeJoinStream, UniqueStream, FilterStream etc..). To 
 accommodate this, the search results can be sorted by specific keys. The 
 /export handler can be used to sort entire result sets efficiently.
 By specifying the sort order of the results and the partition keys, documents 
 will be sorted and partitioned inside of the search engine. So when the 
 tuples hit the network they are already sorted, partitioned and headed 
 directly to the correct worker node.
 *Extending The Framework*
 To extend the framework you create new TupleStream Decorators, that gather 
 custom metrics or perform custom stream transformations.
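The shuffling contract described above (Tuples with the same partitionKeys values always reach the same Worker, so each Worker sees complete groups) can be sketched in a few lines. This is an illustration only, not Solr's actual HashQParserPlugin code; the class and method names here are invented for the example:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of hash partitioning: tuples carrying the same
// partition-key values always hash to the same worker, which is the
// property a parallel ReducerStream relies on to see whole groups.
public class PartitionSketch {
    // Map a tuple's concatenated partition-key values to one of numWorkers.
    static int workerFor(String partitionKeyValues, int numWorkers) {
        // Math.floorMod keeps the result non-negative even when
        // String.hashCode() is negative.
        return Math.floorMod(partitionKeyValues.hashCode(), numWorkers);
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("user-1", "user-2", "user-1");
        for (String k : keys) {
            System.out.println(k + " -> worker " + workerFor(k, 4));
        }
        // Identical keys land on the same worker.
        assert workerFor("user-1", 4) == workerFor("user-1", 4);
    }
}
```

In Solr itself the partitioning happens server-side via a filter query, so the hash never has to be computed by the client; the sketch only shows why equal keys end up co-located.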



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383752#comment-14383752
 ] 

ASF subversion and git services commented on SOLR-7082:
---

Commit 1669557 from [~joel.bernstein] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669557 ]

SOLR-7082: Editing Javadoc

 Streaming Aggregation for SolrCloud
 ---

 Key: SOLR-7082
 URL: https://issues.apache.org/jira/browse/SOLR-7082
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Joel Bernstein
 Fix For: Trunk, 5.1

 Attachments: SOLR-7082.patch, SOLR-7082.patch, SOLR-7082.patch, 
 SOLR-7082.patch, SOLR-7082.patch


 This issue provides a general purpose streaming aggregation framework for 
 SolrCloud. An overview of how it works can be found at this link:
 http://heliosearch.org/streaming-aggregation-for-solrcloud/
 This functionality allows SolrCloud users to perform operations that were 
 typically done using map/reduce or a parallel computing platform.
 Here is a brief explanation of how the framework works:
 There is a new Solrj *io* package found in: *org.apache.solr.client.solrj.io*
 Key classes:
 *Tuple*: Abstracts a document in a search result as a Map of key/value pairs.
 *TupleStream*: is the base class for all of the streams. Abstracts search 
 results as a stream of Tuples.
 *SolrStream*: connects to a single Solr instance. You call the read() method 
 to iterate over the Tuples.
 *CloudSolrStream*: connects to a SolrCloud collection and merges the results 
 based on the sort param. The merge takes place in CloudSolrStream itself.
 *Decorator Streams*: wrap other streams to perform operations on the streams. 
 Some examples are the UniqueStream, MergeStream and ReducerStream.
 *Going parallel with the ParallelStream and  Worker Collections*
 The io package also contains the *ParallelStream*, which wraps a TupleStream 
 and sends it to N worker nodes. The workers are chosen from a SolrCloud 
 collection. These Worker Collections don't have to hold any data, they can 
 just be used to execute TupleStreams.
 *The StreamHandler*
 The Worker nodes have a new RequestHandler called the *StreamHandler*. The 
 ParallelStream serializes a TupleStream, before it is opened, and sends it to 
 the StreamHandler on the Worker Nodes.
 The StreamHandler on each Worker node deserializes the TupleStream, opens the 
 stream, iterates the tuples and streams them back to the ParallelStream. The 
 ParallelStream performs the final merge of Metrics and can be wrapped by 
 other Streams to handle the final merged TupleStream.
 *Sorting and Partitioning search results (Shuffling)*
 Each Worker node is shuffled 1/N of the document results. There is a 
 partitionKeys parameter that can be included with each TupleStream to 
 ensure that Tuples with the same partitionKeys are shuffled to the same 
 Worker. The actual partitioning is done with a filter query using the 
 HashQParserPlugin. The DocSets from the HashQParserPlugin can be cached in 
 the filter cache which provides extremely high performance hash partitioning. 
 Many of the stream transformations rely on the sort order of the TupleStreams 
 (GroupByStream, MergeJoinStream, UniqueStream, FilterStream etc..). To 
 accommodate this, the search results can be sorted by specific keys. The 
 /export handler can be used to sort entire result sets efficiently.
 By specifying the sort order of the results and the partition keys, documents 
 will be sorted and partitioned inside of the search engine. So when the 
 tuples hit the network they are already sorted, partitioned and headed 
 directly to the correct worker node.
 *Extending The Framework*
 To extend the framework you create new TupleStream Decorators, that gather 
 custom metrics or perform custom stream transformations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SolrJ 5.x compatibility with Solr 4.x

2015-03-27 Thread Shawn Heisey
On 3/27/2015 2:29 AM, Karl Wright wrote:
 Now that Solr 5.0.0 has been released, the ManifoldCF project needs to
 figure out the appropriate time to upgrade to a newer SolrJ.  Can anyone
 point me at the backwards-compatibility policy for SolrJ 5.0.0?  Will it
 work properly with Solr 4.x?

We do strive for compatibility, but sometimes there is breakage, in
order to achieve progress.

If you are NOT running SolrCloud, compatibility between wildly different
versions of Solr and SolrJ should be excellent.  For my non-cloud
install, I am using SolrJ 5.0.0 with Solr 4.7.2 and 4.9.1 with no problems.

If you try to mix a 3.x version with a 4.x or 5.x version, or a 1.x
version with anything later, changes in the typical client coding
methods will probably be required, but it is extremely likely that it
will be possible to make it work.

SolrCloud is where the difficulty comes in.  SolrCloud is evolving at an
extremely rapid pace, and the coupling between SolrJ and the SolrCloud
API (zookeeper in particular) is *extremely* tight.

For the most part, if you are running a newer CloudSolrClient version
than the server, things should work ... but for best results, I would
not allow the version spread to get very wide, preferably not more than
one minor release, maybe two.  That means that SolrJ 5.0.0 will probably
work well with SolrCloud 4.10.x, and maybe 4.9.x, but I would not be
surprised to find incompatibilities even in a small version spread.  You
may find that a wide version spread works perfectly ... I have not tried it.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-03-27 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383792#comment-14383792
 ] 

Steve Molloy commented on SOLR-4212:


Indeed, we need to find a way to reconcile, but as the JSON API will be 
experimental in 5.1, I don't think we should stop this new functionality from 
getting in. It's in line with all the other work under this umbrella, some of 
which is already in 5.0. Whether it ends up in Lucene, Solr facet component or 
facet module, the functionality is something needed and I don't think we should 
hold it up as it is still the official facet implementation after all...

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


 Facet pivot provides hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]
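For illustration only (this helper is not part of Solr or SolrJ; the class and method names are invented), the proposed tagged parameters could be assembled into a request query string like this, with only the URL encoding of the local-params values added on top of the syntax shown in the issue:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Hypothetical helper: builds the request query string for the tagged
// facet.query / facet.pivot combination proposed in SOLR-4212.
public class PivotParamsSketch {
    static String buildQueryString(String... params) {
        StringBuilder qs = new StringBuilder("q=*:*&facet=true");
        for (String p : params) {
            int eq = p.indexOf('=');
            try {
                // Encode only the value: the local-params syntax {!...} and
                // range brackets must be escaped to survive inside a URL.
                qs.append('&').append(p, 0, eq).append('=')
                  .append(URLEncoder.encode(p.substring(eq + 1), "UTF-8"));
            } catch (UnsupportedEncodingException e) {
                throw new RuntimeException(e); // UTF-8 is always available
            }
        }
        return qs.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildQueryString(
            "facet.pivot={!query=r1}category,manufacturer",
            "facet.query={!tag=r1}somequery",
            "facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]"));
    }
}
```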



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-03-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383795#comment-14383795
 ] 

Yonik Seeley commented on SOLR-4212:


Experimental has nothing to do with officialness - everything we release is 
official.  It's only the degree to which we try to keep backward compatibility, 
and it can be a good idea for any sufficiently complex API.  So 
"experimental" only means "hey, we still have a chance to easily improve the 
API before it gets locked down harder."

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, patch-4212.txt


 Facet pivot provides hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6367) Can PrefixQuery subclass AutomatonQuery?

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383799#comment-14383799
 ] 

ASF subversion and git services commented on LUCENE-6367:
-

Commit 1669578 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669578 ]

LUCENE-6367: correct CHANGES entry: PrefixQuery already operated in binary term 
space

 Can PrefixQuery subclass AutomatonQuery?
 

 Key: LUCENE-6367
 URL: https://issues.apache.org/jira/browse/LUCENE-6367
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6367.patch, LUCENE-6367.patch


 Spinoff/blocker for LUCENE-5879.
 It seems like PrefixQuery should simply be an AutomatonQuery rather than 
 specializing its own TermsEnum ... with maybe some performance improvements 
 to ByteRunAutomaton.run to short-circuit once it's in a sink state, 
 AutomatonTermsEnum could be just as fast as PrefixTermsEnum.
 If we can do this it will make LUCENE-5879 simpler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6367) Can PrefixQuery subclass AutomatonQuery?

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383797#comment-14383797
 ] 

ASF subversion and git services commented on LUCENE-6367:
-

Commit 1669577 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1669577 ]

LUCENE-6367: correct CHANGES entry: PrefixQuery already operated in binary term 
space

 Can PrefixQuery subclass AutomatonQuery?
 

 Key: LUCENE-6367
 URL: https://issues.apache.org/jira/browse/LUCENE-6367
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6367.patch, LUCENE-6367.patch


 Spinoff/blocker for LUCENE-5879.
 It seems like PrefixQuery should simply be an AutomatonQuery rather than 
 specializing its own TermsEnum ... with maybe some performance improvements 
 to ByteRunAutomaton.run to short-circuit once it's in a sink state, 
 AutomatonTermsEnum could be just as fast as PrefixTermsEnum.
 If we can do this it will make LUCENE-5879 simpler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2843 - Still Failing

2015-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2843/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:36660/c8n_1x3_commits_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36660/c8n_1x3_commits_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([99FF4379704481E0:11AB7CA3DEB8EC18]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: SolrJ 5.x compatibility with Solr 4.x

2015-03-27 Thread Karl Wright
Thanks, Shawn.

We strive to be sure that the underlying dependencies of SolrJ are all the
correct versions.  So my question revolved more around any known
dependencies between SolrJ and specific Solr API elements.  We understand
that Solr Cloud moves quickly and we try to keep up by updating SolrJ.  It
would be good to know that efforts were being made to assure backwards
compatibility so that a newer Solr Cloud would at least work properly
against an older Solr instance.

If that cannot be assured, then it seems to me like the current
architecture needs revision in some way.  Clients really aren't able to
re-release code on the same schedule as Lucene, and cannot be expected to
support multiple SolrJ libraries simultaneously.

Karl


On Fri, Mar 27, 2015 at 9:10 AM, Shawn Heisey apa...@elyograg.org wrote:

 On 3/27/2015 2:29 AM, Karl Wright wrote:
  Now that Solr 5.0.0 has been released, the ManifoldCF project needs to
  figure out the appropriate time to upgrade to a newer SolrJ.  Can anyone
  point me at the backwards-compatibility policy for SolrJ 5.0.0?  Will it
  work properly with Solr 4.x?

 We do strive for compatibility, but sometimes there is breakage, in
 order to achieve progress.

 If you are NOT running SolrCloud, compatibility between wildly different
 versions of Solr and SolrJ should be excellent.  For my non-cloud
 install, I am using SolrJ 5.0.0 with Solr 4.7.2 and 4.9.1 with no problems.

 If you try to mix a 3.x version with a 4.x or 5.x version, or a 1.x
 version with anything later, changes in the typical client coding
 methods will probably be required, but it is extremely likely that it
 will be possible to make it work.

 SolrCloud is where the difficulty comes in.  SolrCloud is evolving at an
 extremely rapid pace, and the coupling between SolrJ and the SolrCloud
 API (zookeeper in particular) is *extremely* tight.

 For the most part, if you are running a newer CloudSolrClient version
 than the server, things should work ... but for best results, I would
 not allow the version spread to get very wide, preferably not more than
 one minor release, maybe two.  That means that SolrJ 5.0.0 will probably
 work well with SolrCloud 4.10.x, and maybe 4.9.x, but I would not be
 surprised to find incompatibilities even in a small version spread.  You
 may find that a wide version spread works perfectly ... I have not tried
 it.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-7134) Replication can still cause index corruption.

2015-03-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383858#comment-14383858
 ] 

Mark Miller commented on SOLR-7134:
---

Yeah, thanks, give me a couple hours, I'll do it shortly.

 Replication can still cause index corruption.
 -

 Key: SOLR-7134
 URL: https://issues.apache.org/jira/browse/SOLR-7134
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: Trunk, 5.1

 Attachments: SOLR-7134.patch, SOLR-7134.patch, SOLR-7134.patch, 
 SOLR-7134.patch, SOLR-7134.patch


 While we have plugged most of these holes, there appears to be another that 
 is fairly rare.
 I've seen it play out a couple ways in tests, but it looks like part of the 
 problem is that even if we decide we need a file and download it, we don't 
 care if we then cannot move it into place if it already exists.
 I'm working with a fix that does two things:
 * Fail a replication attempt if we cannot move a file into place because it 
 already exists.
 * If a replication attempt during recovery fails, on the next attempt force a 
 full replication to a new directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383912#comment-14383912
 ] 

Shawn Heisey commented on SOLR-7319:


The situation where the bug presents itself -- heavy writes via mmap -- is 
exactly what Solr (Lucene) does when indexing or merging.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 We should implement the workaround and document the impact in CHANGES.txt.
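(Editorial note, for context: the workaround described in the linked blog post is the `-XX:+PerfDisableSharedMem` JVM flag, which stops the JVM from mmap-writing its perf statistics file; as noted later in this issue, it also disables tools like jstat that read that file. A hedged sketch of passing it through Solr's start script, assuming `SOLR_OPTS` is honored by `bin/solr` in 5.x:)

```shell
# Assumption: bin/solr in Solr 5.x appends SOLR_OPTS to the JVM command line.
# -XX:+PerfDisableSharedMem is the flag from the blog post; side effect is
# that jstat and similar perf-file readers stop working for this process.
SOLR_OPTS="$SOLR_OPTS -XX:+PerfDisableSharedMem"
export SOLR_OPTS
echo "$SOLR_OPTS"
```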



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Comment: was deleted

(was: Patch implementing the workaround, and documenting the fact that tools 
like jstat will no longer function.)

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Comment: was deleted

(was: The situation where the bug presents itself -- heavy writes via mmap -- 
is exactly what Solr (Lucene) does when indexing or merging.)

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Updated] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Description: 
A twitter engineer found a bug in the JVM that contributes to GC pause problems:

http://www.evanjones.ca/jvm-mmap-pause.html

Problem summary (in case the blog post disappears):  The JVM calculates 
statistics on things like garbage collection and writes them to a file in the 
temp directory using MMAP.  If there is a lot of other MMAP write activity, 
which is precisely how Lucene accomplishes indexing and merging, it can result 
in a GC pause because the mmap write to the temp file is delayed.

We should implement the workaround in the solr start scripts (disable creation 
of the mmap statistics tempfile) and document the impact in CHANGES.txt.


  was:
A twitter engineer found a bug in the JVM that contributes to GC pause problems:

http://www.evanjones.ca/jvm-mmap-pause.html

We should implement the workaround and document the impact in CHANGES.txt.



 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383925#comment-14383925
 ] 

Shawn Heisey commented on SOLR-7319:


Attached patch implementing the workaround and documenting the impact in 
CHANGES.txt -- tools like jstat will no longer function.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Updated] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Attachment: SOLR-7319.patch

Found a typo in the patch.  Fixed.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Created] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-7319:
--

 Summary: Workaround the Four Month Bug causing GC pause problems
 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1


A twitter engineer found a bug in the JVM that contributes to GC pause problems:

http://www.evanjones.ca/jvm-mmap-pause.html

We should implement the workaround and document the impact in CHANGES.txt.







[jira] [Updated] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Attachment: SOLR-7319.patch

Patch implementing the workaround, and documenting the fact that tools like 
jstat will no longer function.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 We should implement the workaround and document the impact in CHANGES.txt.






[jira] [Commented] (SOLR-7134) Replication can still cause index corruption.

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383928#comment-14383928
 ] 

ASF subversion and git services commented on SOLR-7134:
---

Commit 1669600 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669600 ]

SOLR-7134: Replication can still cause index corruption.

 Replication can still cause index corruption.
 -

 Key: SOLR-7134
 URL: https://issues.apache.org/jira/browse/SOLR-7134
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: Trunk, 5.1

 Attachments: SOLR-7134.patch, SOLR-7134.patch, SOLR-7134.patch, 
 SOLR-7134.patch, SOLR-7134.patch


 While we have plugged most of these holes, there appears to be another that 
 is fairly rare.
 I've seen it play out a couple ways in tests, but it looks like part of the 
 problem is that even if we decide we need a file and download it, we don't 
 care if we then cannot move it into place if it already exists.
 I'm working with a fix that does two things:
 * Fail a replication attempt if we cannot move a file into place because it 
 already exists.
 * If a replication attempt during recovery fails, on the next attempt force a 
 full replication to a new directory.
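
The first rule above can be sketched in shell, with made-up file names: refuse to move a downloaded file into place when the target already exists, and treat that as a failed replication attempt rather than silently continuing:

```shell
# Minimal illustration (file names are invented for the example).
work=$(mktemp -d)
dest="$work/index/_1.cfs"          # file already in the local index
tmp="$work/tmp/_1.cfs.download"    # freshly downloaded copy
mkdir -p "$work/index" "$work/tmp"
touch "$dest" "$tmp"
if [ -e "$dest" ]; then
  # Target exists: fail the attempt instead of moving over it.
  echo "replication failed: target already exists"
else
  mv "$tmp" "$dest"
fi
```

On the next attempt after such a failure, the second rule kicks in: force a full replication into a fresh directory so stale files cannot collide.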






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 799 - Still Failing

2015-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/799/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:13238/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:13238/collection1
at 
__randomizedtesting.SeedInfo.seed([D557C075CD234EE9:5D03FFAF63DF2311]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:224)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:166)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:671)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7226) Make /query/* jmx/* , requestDispatcher/*, listener initParams properties in solrconfig.xml editable

2015-03-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384239#comment-14384239
 ] 

Noble Paul commented on SOLR-7226:
--

[~shaie] yeah, I had to comment it out because it was failing. Now I have put 
it back. 

 Make /query/* jmx/* , requestDispatcher/*, listener initParams properties 
 in solrconfig.xml editable
 

 Key: SOLR-7226
 URL: https://issues.apache.org/jira/browse/SOLR-7226
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The list of properties are
 query/useFilterForSortedQuery 
 query/queryResultWindowSize
 query/queryResultMaxDocsCached
 query/enableLazyFieldLoading
 query/boolTofilterOptimizer
 query/maxBooleanClauses






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-03-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384245#comment-14384245
 ] 

Noble Paul commented on SOLR-5944:
--

I would recommend the solution where \_version_ is a dv field and no separate 
version for each updateable dv field. 



 Support updates of numeric DocValues
 

 Key: SOLR-5944
 URL: https://issues.apache.org/jira/browse/SOLR-5944
 Project: Solr
  Issue Type: New Feature
Reporter: Ishan Chattopadhyaya
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
 SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
 SOLR-5944.patch, SOLR-5944.patch


 LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
 really nice to have Solr support this.






[jira] [Commented] (SOLR-7226) Make /query/* jmx/* , requestDispatcher/*, listener initParams properties in solrconfig.xml editable

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384237#comment-14384237
 ] 

ASF subversion and git services commented on SOLR-7226:
---

Commit 1669634 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1669634 ]

SOLR-7226: fixed the commented out testcase due to failure

 Make /query/* jmx/* , requestDispatcher/*, listener initParams properties 
 in solrconfig.xml editable
 

 Key: SOLR-7226
 URL: https://issues.apache.org/jira/browse/SOLR-7226
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The list of properties are
 query/useFilterForSortedQuery 
 query/queryResultWindowSize
 query/queryResultMaxDocsCached
 query/enableLazyFieldLoading
 query/boolTofilterOptimizer
 query/maxBooleanClauses






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 11948 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11948/
Java: 64bit/jdk1.7.0_80-ea-b05 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in collection1 come up within 3 
ms! ClusterState: {   collection1:{ shards:{shard1:{ 
range:8000-7fff, state:active, replicas:{ 
  core_node1:{ base_url:http://127.0.0.1:59034/hu_/pr;,
 node_name:127.0.0.1:59034_hu_%2Fpr, state:active,  
   core:collection1, leader:true},   
core_node2:{ base_url:http://127.0.0.1:36813/hu_/pr;,  
   node_name:127.0.0.1:36813_hu_%2Fpr, state:recovering,
 core:collection1, maxShardsPerNode:1, 
autoAddReplicas:false, autoCreated:true, 
router:{name:compositeId}, replicationFactor:1},   
control_collection:{ shards:{shard1:{ 
range:8000-7fff, state:active, 
replicas:{core_node1:{ 
base_url:http://127.0.0.1:57546/hu_/pr;, 
node_name:127.0.0.1:57546_hu_%2Fpr, state:active,   
  core:collection1, leader:true, 
maxShardsPerNode:1, autoAddReplicas:false, 
autoCreated:true, router:{name:compositeId}, 
replicationFactor:1}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
collection1 come up within 3 ms! ClusterState: {
  collection1:{
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
base_url:http://127.0.0.1:59034/hu_/pr;,
node_name:127.0.0.1:59034_hu_%2Fpr,
state:active,
core:collection1,
leader:true},
  core_node2:{
base_url:http://127.0.0.1:36813/hu_/pr;,
node_name:127.0.0.1:36813_hu_%2Fpr,
state:recovering,
core:collection1,
maxShardsPerNode:1,
autoAddReplicas:false,
autoCreated:true,
router:{name:compositeId},
replicationFactor:1},
  control_collection:{
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{core_node1:{
base_url:http://127.0.0.1:57546/hu_/pr;,
node_name:127.0.0.1:57546_hu_%2Fpr,
state:active,
core:collection1,
leader:true,
maxShardsPerNode:1,
autoAddReplicas:false,
autoCreated:true,
router:{name:compositeId},
replicationFactor:1}}
at 
__randomizedtesting.SeedInfo.seed([5CD88DC2579458C8:D48CB218F9683530]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1983)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 

[jira] [Updated] (SOLR-7309) bin/solr will not run when the solr home path contains a space

2015-03-27 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar updated SOLR-7309:

Attachment: SOLR-7309.patch

Modify bin/post as well..

 bin/solr will not run when the solr home path contains a space
 --

 Key: SOLR-7309
 URL: https://issues.apache.org/jira/browse/SOLR-7309
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools, Server
Affects Versions: 5.0
Reporter: Martijn Koster
Assignee: Ramkumar Aiyengar
Priority: Minor
 Attachments: SOLR-7309.patch, SOLR-7309.patch, SOLR-7309.patch


 I thought I spotted some unquoted {{$SOLR_TIP}} references in {{bin/solr}} 
 with 5.0.0, prompting me to test:
 {noformat}
 $ mv solr-5.0.0 solr-5.0.0-with' space'
 $ cd solr-5.0.0-with' space'
 $ ./bin/solr -f
 ./bin/solr: line 1161: [: too many arguments
 ./bin/solr: line 1187: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1194: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1327: cd: /Users/mak/Downloads/solr-5.0.0-with: No such file 
 or directory
 Starting Solr on port 8983 from /Users/mak/Downloads/solr-5.0.0-with 
 space/server
 Error: Could not find or load main class space.server.logs.solr_gc.log
 {noformat}
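
The errors above are the classic symptom of unquoted variable expansion: an unquoted path containing a space splits into multiple words inside `[ ... ]`, producing exactly the "[: too many arguments" and "binary operator expected" messages in the report. A minimal illustration (the path below is made up, not the reporter's):

```shell
# Create an install dir whose path contains a space.
SOLR_TIP=$(mktemp -d)"/solr-5.0.0-with space"
mkdir -p "$SOLR_TIP/server"

# Quoted: the path stays one word, so the test works.
[ -d "$SOLR_TIP/server" ] && echo "quoted test ok"

# Unquoted: the path splits into two words, so [ sees three operands
# and fails, just as in the bug report.
[ -d $SOLR_TIP/server ] 2>/dev/null || echo "unquoted test failed"
```

The fix in the patch is the usual one: quote every expansion of `$SOLR_TIP` (and friends) in bin/solr and bin/post.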






[jira] [Commented] (SOLR-7309) bin/solr will not run when the solr home path contains a space

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384197#comment-14384197
 ] 

ASF subversion and git services commented on SOLR-7309:
---

Commit 1669628 from [~andyetitmoves] in branch 'dev/trunk'
[ https://svn.apache.org/r1669628 ]

SOLR-7309: Make bin/solr, bin/post work when Solr installation directory 
contains spaces

 bin/solr will not run when the solr home path contains a space
 --

 Key: SOLR-7309
 URL: https://issues.apache.org/jira/browse/SOLR-7309
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools, Server
Affects Versions: 5.0
Reporter: Martijn Koster
Assignee: Ramkumar Aiyengar
Priority: Minor
 Attachments: SOLR-7309.patch, SOLR-7309.patch, SOLR-7309.patch


 I thought I spotted some unquoted {{$SOLR_TIP}} references in {{bin/solr}} 
 with 5.0.0, prompting me to test:
 {noformat}
 $ mv solr-5.0.0 solr-5.0.0-with' space'
 $ cd solr-5.0.0-with' space'
 $ ./bin/solr -f
 ./bin/solr: line 1161: [: too many arguments
 ./bin/solr: line 1187: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1194: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1327: cd: /Users/mak/Downloads/solr-5.0.0-with: No such file 
 or directory
 Starting Solr on port 8983 from /Users/mak/Downloads/solr-5.0.0-with 
 space/server
 Error: Could not find or load main class space.server.logs.solr_gc.log
 {noformat}






[jira] [Commented] (SOLR-7226) Make /query/* jmx/* , requestDispatcher/*, listener initParams properties in solrconfig.xml editable

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384250#comment-14384250
 ] 

ASF subversion and git services commented on SOLR-7226:
---

Commit 1669637 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669637 ]

SOLR-7226: fixed the commented out testcase due to failure

 Make /query/* jmx/* , requestDispatcher/*, listener initParams properties 
 in solrconfig.xml editable
 

 Key: SOLR-7226
 URL: https://issues.apache.org/jira/browse/SOLR-7226
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The list of properties are
 query/useFilterForSortedQuery 
 query/queryResultWindowSize
 query/queryResultMaxDocsCached
 query/enableLazyFieldLoading
 query/boolTofilterOptimizer
 query/maxBooleanClauses






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-03-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384251#comment-14384251
 ] 

Yonik Seeley commented on SOLR-5944:


bq. I would recommend the solution where _version_ is a dv field and no 
separate version for each updateable dv field.

+1

 Support updates of numeric DocValues
 

 Key: SOLR-5944
 URL: https://issues.apache.org/jira/browse/SOLR-5944
 Project: Solr
  Issue Type: New Feature
Reporter: Ishan Chattopadhyaya
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
 SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
 SOLR-5944.patch, SOLR-5944.patch


 LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
 really nice to have Solr support this.






[jira] [Commented] (SOLR-7304) SyntaxError in SpellcheckComponent

2015-03-27 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384061#comment-14384061
 ] 

James Dyer commented on SOLR-7304:
--

Can you post the original query request here?  Are you saying the collator is 
lowercasing the word, so that when it tries to test the collation, the 
query is invalid?  Or did the original query have invalid range syntax as well? 
 Also, it looks like the closing bracket is incorrect?  I'm seeing a } instead 
of a ] .  Did the original query have incorrect brackets as well, or did this 
get introduced by the collator? (Or did JIRA corrupt this?)

 SyntaxError in SpellcheckComponent
 --

 Key: SOLR-7304
 URL: https://issues.apache.org/jira/browse/SOLR-7304
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.9
 Environment: Jetty
 Debian
Reporter: Hakim
Priority: Minor
  Labels: range, spellchecker
 Fix For: 4.9


 I have an error with SpellCheckComponent since I have added this 
 SearchComponent to /select RequestHandler (see solrconfig.xml).
   <requestHandler name="/select" class="solr.SearchHandler">
 <!-- default values for query parameters can be specified, these
  will be overridden by parameters in the request
   -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="df">titre</str>
 <!-- h4k1m: configure spellcheck if enabled -->
    <str name="spellcheck">on</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.extendedResults">true</str>
    <str name="spellcheck.count">3</str>
    <str name="spellcheck.alternativeTermCount">3</str>
    <str name="spellcheck.maxResultsForSuggest">5</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.collateExtendedResults">true</str>
    <str name="spellcheck.maxCollationTries">10</str>
    <str name="spellcheck.maxCollations">1</str>
    <str name="spellcheck.onlyMorePopular">false</str>
    <str name="combineWords">false</str>
  </lst>
 The error seems to be related to range queries, with the [.. to ..] written 
 in lowercase. The query performed by the SpellCheck component using 'to' in 
 lower case throws the RANGE_GOOP error.
 101615 [qtp2145626092-38] WARN  org.apache.solr.spelling.SpellCheckCollator  
 - Exception trying to re-query to check if a spell check possibility would 
 return any hits.
 org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
 Cannot parse 'offredemande:offre AND categorieparente:audi AND 
 prix:[216 to 2250008} AND anneemodele:[2003 to 2008} AND etat:nauf': 
 Encountered  RANGE_GOOP 2250008  at line 1, column 68.
 Was expecting one of:
 ] ...
 } ...
 at 
 org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:205)
 at 
 org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
 at 
 org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
 at 
 org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1645)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:564)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:498)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:199)
 at 
 

[jira] [Commented] (SOLR-1387) Add more search options for filtering field facets.

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384060#comment-14384060
 ] 

Michael McCandless commented on SOLR-1387:
--

I'm concerned about the StringHelper.contains that was added for this issue:

  * Its signature implies it operates on BytesRef, but under the hood it 
secretly assumes the bytes are valid UTF-8 (only for the ignoreCase=true case)

  * It also secretly assumes Locale.ENGLISH for downcasing but the incoming 
UTF-8 bytes may not be English

  * It has potentially poor performance compared to known algorithms, e.g. 
http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm

Can we move this method out for now, e.g. not put it in the shared StringHelper 
utility class?
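The locale dependence described above can be demonstrated without Lucene at all. A minimal JDK-only sketch (not the StringHelper code itself) of why hard-coding Locale.ENGLISH for downcasing misbehaves when the bytes are not English, using the well-known Turkish dotless-i case:

```java
import java.util.Locale;

public class LocaleLowercasePitfall {
    public static void main(String[] args) {
        String facetValue = "TITLE";
        // English rules map 'I' -> 'i', so a case-insensitive match succeeds.
        System.out.println(facetValue.toLowerCase(Locale.ENGLISH).contains("title")); // true
        // Turkish rules map 'I' -> dotless 'ı', so the same match silently fails.
        Locale turkish = new Locale("tr", "TR");
        System.out.println(facetValue.toLowerCase(turkish).contains("title")); // false
    }
}
```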

 Add more search options for filtering field facets.
 ---

 Key: SOLR-1387
 URL: https://issues.apache.org/jira/browse/SOLR-1387
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Anil Khadka
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: SOLR-1387.patch


 Currently for filtering the facets, we have to use prefix (which use 
 String.startsWith() in java). 
 We can add some parameters like
 * facet.iPrefix : this would act like case-insensitive search. (or --- 
 facet.prefix=a&facet.caseinsense=on)
 * facet.regex : this is pure regular expression search (which obviously would 
 be expensive if issued).
 Moreover, allowing multiple filtering for same field would be great like
 facet.prefix=a OR facet.prefix=A ... sth like this.
 All above concepts could be equally applicable to TermsComponent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384124#comment-14384124
 ] 

Paul Elschot commented on LUCENE-6308:
--

From the current TermSpans.java:
{code}
  public int end() {
return position + 1;
  }
{code}
For position == Integer.MAX_VALUE that has never behaved well.
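The failure mode is plain int overflow. A minimal stand-alone sketch (not the TermSpans code) of what {{position + 1}} does at the sentinel value:

```java
public class SpanEndOverflow {
    public static void main(String[] args) {
        // If position is used as a sentinel at Integer.MAX_VALUE,
        // computing end() as position + 1 silently wraps negative.
        int position = Integer.MAX_VALUE;
        int end = position + 1;
        System.out.println(end); // -2147483648 (Integer.MIN_VALUE)
    }
}
```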

 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384013#comment-14384013
 ] 

Robert Muir commented on LUCENE-6308:
-

Alan is correct. Mike, the issue is that you don't need 2.1B tokens to hit this. And 
I don't know where the idea that it's a reserved value comes from. At least nothing is 
reserving it, in IndexWriter or elsewhere. Instead we just check for 
negatives/overflow.

I've seen people use large posInc gaps between fields. This can produce huge 
position numbers. Also, if someone forgets clearAttributes, the positions grow 
exponentially. Sure it's bad, but even for small docs I bet plenty of people have 
HUGE positions and don't realize it.

Can we just change the constant to -2? Then the problem is solved.

 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-1387) Add more search options for filtering field facets.

2015-03-27 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened SOLR-1387:
--

 Add more search options for filtering field facets.
 ---

 Key: SOLR-1387
 URL: https://issues.apache.org/jira/browse/SOLR-1387
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Anil Khadka
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: SOLR-1387.patch


 Currently for filtering the facets, we have to use prefix (which use 
 String.startsWith() in java). 
 We can add some parameters like
 * facet.iPrefix : this would act like case-insensitive search. (or --- 
 facet.prefix=a&facet.caseinsense=on)
 * facet.regex : this is pure regular expression search (which obviously would 
 be expensive if issued).
 Moreover, allowing multiple filtering for same field would be great like
 facet.prefix=a OR facet.prefix=A ... sth like this.
 All above concepts could be equally applicable to TermsComponent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6374) remove unused BitUtil code

2015-03-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6374:
---

 Summary: remove unused BitUtil code
 Key: LUCENE-6374
 URL: https://issues.apache.org/jira/browse/LUCENE-6374
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6374.patch

There is some dead, unused, and untested code here; we should nuke it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6374) remove unused BitUtil code

2015-03-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6374:

Attachment: LUCENE-6374.patch

 remove unused BitUtil code
 --

 Key: LUCENE-6374
 URL: https://issues.apache.org/jira/browse/LUCENE-6374
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6374.patch


 There is some dead, unused, and untested code here; we should nuke it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384020#comment-14384020
 ] 

Timothy Potter commented on SOLR-7319:
--

+1 ~ great find Shawn!
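For reference, the workaround in question is the HotSpot flag {{-XX:+PerfDisableSharedMem}}, which stops the JVM from mmap-ing its hsperfdata statistics file into the temp directory (note it also breaks tools like jps/jstat that read that file). A sketch of passing it at startup, assuming the 5.x bin/solr script's option for additional JVM arguments (the actual patch wires it into the start scripts themselves):

```shell
# Disable the JVM's mmap-ed performance-statistics file (hsperfdata)
# to avoid GC pauses caused by delayed mmap writes to /tmp.
# Side effect: jps/jstat can no longer monitor this process.
bin/solr start -a "-XX:+PerfDisableSharedMem"
```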

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1387) Add more search options for filtering field facets.

2015-03-27 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384068#comment-14384068
 ] 

Alan Woodward commented on SOLR-1387:
-

bq. Can we move this method out for now, e.g. not put it in the shared 
StringHelper utility class?

Sure, we could move it into the Solr faceting code.  I'm away from an 
svn-accessible machine for the next 10 days or so; I can do it when I get back, or 
feel free to move it yourself.

 Add more search options for filtering field facets.
 ---

 Key: SOLR-1387
 URL: https://issues.apache.org/jira/browse/SOLR-1387
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Anil Khadka
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: SOLR-1387.patch


 Currently for filtering the facets, we have to use prefix (which use 
 String.startsWith() in java). 
 We can add some parameters like
 * facet.iPrefix : this would act like case-insensitive search. (or --- 
 facet.prefix=a&facet.caseinsense=on)
 * facet.regex : this is pure regular expression search (which obviously would 
 be expensive if issued).
 Moreover, allowing multiple filtering for same field would be great like
 facet.prefix=a OR facet.prefix=A ... sth like this.
 All above concepts could be equally applicable to TermsComponent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7309) bin/solr will not run when the solr home path contains a space

2015-03-27 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384091#comment-14384091
 ] 

Ramkumar Aiyengar commented on SOLR-7309:
-

Attached a revised patch which covers a few more cases and switches over to 
using bash arrays where it makes sense.
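The class of failure being fixed is unquoted variable expansion inside {{[ ... ]}} tests. A minimal sketch (not the actual bin/solr code) of the bug in the report and the quoted fix:

```shell
dir="/tmp/solr test dir"   # a path containing a space
mkdir -p "$dir"

# Unquoted, $dir expands to two words, so [ sees too many arguments --
# this is exactly the "binary operator expected" error in the report:
# [ -d $dir ] && echo "unquoted"

# Quoted, the expansion stays a single word and the test behaves:
[ -d "$dir" ] && echo "quoted test works"

rmdir "$dir"
```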

 bin/solr will not run when the solr home path contains a space
 --

 Key: SOLR-7309
 URL: https://issues.apache.org/jira/browse/SOLR-7309
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools, Server
Affects Versions: 5.0
Reporter: Martijn Koster
Assignee: Ramkumar Aiyengar
Priority: Minor
 Attachments: SOLR-7309.patch, SOLR-7309.patch


 I thought I spotted some unquoted {{$SOLR_TIP}} references in {{bin/solr}} 
 with 5.0.0, prompting me to test:
 {noformat}
 $ mv solr-5.0.0 solr-5.0.0-with' space'
 $ cd solr-5.0.0-with' space'
 $ ./bin/solr -f
 ./bin/solr: line 1161: [: too many arguments
 ./bin/solr: line 1187: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1194: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1327: cd: /Users/mak/Downloads/solr-5.0.0-with: No such file 
 or directory
 Starting Solr on port 8983 from /Users/mak/Downloads/solr-5.0.0-with 
 space/server
 Error: Could not find or load main class space.server.logs.solr_gc.log
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_76) - Build # 4485 - Still Failing!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4485/
Java: 64bit/jdk1.7.0_76 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
Could not get the port for ZooKeeper server

Stack Trace:
java.lang.RuntimeException: Could not get the port for ZooKeeper server
at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:506)
at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:225)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:90)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.SimpleCollectionCreateDeleteTest.test

Error Message:
Could not get the port for ZooKeeper server

Stack Trace:
java.lang.RuntimeException: Could not get the port for ZooKeeper server
at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:506)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.distribSetUp(AbstractDistribZkTestBase.java:62)
at 

Re: SolrJ 5.x compatibility with Solr 4.x

2015-03-27 Thread Shalin Shekhar Mangar
SolrJ 4.x will not work with collections created from Solr 5.0, but
otherwise I am not aware of any incompatibility between 4.x and 5.x. There
was a compat break in SolrJ during 4.8, I think, but it was due to a bug. The
standard advice on upgrading is to upgrade the client first and then the
server. SolrJ 5.0 will work fine with Solr 4.x.

On Fri, Mar 27, 2015 at 6:26 AM, Karl Wright daddy...@gmail.com wrote:

 Thanks, Shawn.

 We strive to be sure that the underlying dependencies of SolrJ are all the
 correct versions.  So my question revolved more around any known
 dependencies between SolrJ and specific Solr API elements.  We understand
 that Solr Cloud moves quickly and we try to keep up by updating SolrJ.  It
 would be good to know that efforts were being made to assure backwards
 compatibility so that a newer SolrJ would at least work properly
 against an older Solr instance.

 If that cannot be assured, then it seems to me like the current
 architecture needs revision in some way.  Clients really aren't able to
 re-release code on the same schedule as Lucene, and cannot be expected to
 support multiple SolrJ libraries simultaneously.

 Karl


 On Fri, Mar 27, 2015 at 9:10 AM, Shawn Heisey apa...@elyograg.org wrote:

 On 3/27/2015 2:29 AM, Karl Wright wrote:
  Now that Solr 5.0.0 has been released, the ManifoldCF project needs to
  figure out the appropriate time to upgrade to a newer SolrJ.  Can anyone
  point me at the backwards-compatibility policy for SolrJ 5.0.0?  Will it
  work properly with Solr 4.x?

 We do strive for compatibility, but sometimes there is breakage, in
 order to achieve progress.

 If you are NOT running SolrCloud, compatibility between wildly different
 versions of Solr and SolrJ should be excellent.  For my non-cloud
 install, I am using SolrJ 5.0.0 with Solr 4.7.2 and 4.9.1 with no
 problems.

 If you try to mix a 3.x version with a 4.x or 5.x version, or a 1.x
 version with anything later, changes in the typical client coding
 methods will probably be required, but it is extremely likely that it
 will be possible to make it work.

 SolrCloud is where the difficulty comes in.  SolrCloud is evolving at an
 extremely rapid pace, and the coupling between SolrJ and the SolrCloud
 API (zookeeper in particular) is *extremely* tight.

 For the most part, if you are running a newer CloudSolrClient version
 than the server, things should work ... but for best results, I would
 not allow the version spread to get very wide, preferably not more than
 one minor release, maybe two.  That means that SolrJ 5.0.0 will probably
 work well with SolrCloud 4.10.x, and maybe 4.9.x, but I would not be
 surprised to find incompatibilities even in a small version spread.  You
 may find that a wide version spread works perfectly ... I have not tried
 it.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Created] (SOLR-7320) SolrJ - add ability to set timeouts at request level

2015-03-27 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-7320:
--

 Summary: SolrJ - add ability to set timeouts at request level
 Key: SOLR-7320
 URL: https://issues.apache.org/jira/browse/SOLR-7320
 Project: Solr
  Issue Type: Improvement
  Components: clients - java, SolrJ
Affects Versions: 5.0
Reporter: Shawn Heisey


There are sometimes good reasons to have a default set of timeouts for the 
httpclient, but utilize a different set of timeouts for individual requests.  
You might want to have a 5 minute socket timeout for requests normally, which 
you set when you create the HttpClient object that the clients use ... but when 
you send an optimize request, or a particularly large update request, five 
minutes probably isn't going to be enough -- you might want that request to 
have a 30 minute socket timeout instead.

If setting the timeout values is accomplished using parameters in the request 
(which get removed before sending to Solr) instead of adding a parameter to 
method signatures, then it could be utilized on all request types and even with 
a subset of the sugar methods, like SolrClient#query.
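The strip-before-send mechanism described above can be sketched in isolation. This is a hypothetical illustration, not SolrJ code: the parameter name {{clientSocketTimeout}} and the helper are invented for the sketch, and a real implementation would plug into the HttpClient request config instead of printing:

```java
import java.util.HashMap;
import java.util.Map;

public class RequestTimeoutSketch {
    // Hypothetical client-only parameter name; NOT a real SolrJ parameter.
    static final String SO_TIMEOUT_PARAM = "clientSocketTimeout";

    // Pull the client-side timeout out of the params before they are sent
    // to Solr, falling back to the client's default when absent.
    static int extractSocketTimeout(Map<String, String> params, int defaultMs) {
        String v = params.remove(SO_TIMEOUT_PARAM);
        return v != null ? Integer.parseInt(v) : defaultMs;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("q", "*:*");
        params.put(SO_TIMEOUT_PARAM, "1800000"); // 30 minutes for a big optimize
        int timeoutMs = extractSocketTimeout(params, 300000);
        System.out.println(timeoutMs);                         // 1800000
        System.out.println(params.containsKey(SO_TIMEOUT_PARAM)); // false: stripped
    }
}
```

Because the timeout rides along in the request parameters, no method signatures change, so the same trick works for sugar methods like SolrClient#query.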



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6339:
-
Attachment: LUCENE-6339.patch

Thanks [~mikemccand] for the suggestions!

Updated Patch:
 - separate out SuggestScoreDocPriorityQueue (break ties with docID)
 - use long to calculate maxQueueSize
 - minor changes

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, 
 LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
 PostingsFormat completionPostingsFormat = new 
 Completion50PostingsFormat();
 @Override
 public PostingsFormat getPostingsFormatForField(String field) {
   if (isSuggestField(field)) {
 return completionPostingsFormat;
   }
   return super.getPostingsFormatForField(field);
 }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an nrt reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, 
 queryAnalyzer);
   
   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects does not score
 void suggest(String field, CharSequence key, int num, Filter filter, 
 TopSuggestDocsCollector collector) 
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used instead to wrap another analyzer to tune 
 suggest field only parameters. 
 {code:java}
 CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
 preservePositionIncrements, int maxGraphExpansions)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7319:
---
Attachment: SOLR-7319.patch

New patch, this time against trunk, resolving a merge conflict on CHANGES.txt 
with another commit.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7309) bin/solr will not run when the solr home path contains a space

2015-03-27 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar updated SOLR-7309:

Attachment: SOLR-7309.patch

 bin/solr will not run when the solr home path contains a space
 --

 Key: SOLR-7309
 URL: https://issues.apache.org/jira/browse/SOLR-7309
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools, Server
Affects Versions: 5.0
Reporter: Martijn Koster
Assignee: Ramkumar Aiyengar
Priority: Minor
 Attachments: SOLR-7309.patch, SOLR-7309.patch


 I thought I spotted some unquoted {{$SOLR_TIP}} references in {{bin/solr}} 
 with 5.0.0, prompting me to test:
 {noformat}
 $ mv solr-5.0.0 solr-5.0.0-with' space'
 $ cd solr-5.0.0-with' space'
 $ ./bin/solr -f
 ./bin/solr: line 1161: [: too many arguments
 ./bin/solr: line 1187: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1194: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1327: cd: /Users/mak/Downloads/solr-5.0.0-with: No such file 
 or directory
 Starting Solr on port 8983 from /Users/mak/Downloads/solr-5.0.0-with 
 space/server
 Error: Could not find or load main class space.server.logs.solr_gc.log
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6232) Replace ValueSource context Map with a more concrete data type

2015-03-27 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384436#comment-14384436
 ] 

Mike Drob commented on LUCENE-6232:
---

[~ysee...@gmail.com] - since you're backporting a bunch of other HelioSearch 
features already, do you think this will make it in transitively as well?

 Replace ValueSource context Map with a more concrete data type
 --

 Key: LUCENE-6232
 URL: https://issues.apache.org/jira/browse/LUCENE-6232
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mike Drob
 Attachments: LUCENE-6232.patch, LUCENE-6232.patch, LUCENE-6232.patch


 Inspired by LUCENE-3973
 The context object used by ValueSource and friends is a raw Map that provides 
 no type safety guarantees. In our current state, there are lots of warnings 
 about unchecked casts, raw types, and generally unsafe code from the 
 compiler's perspective.
 There are several common patterns and types of Objects that we store in the 
 context. It would be beneficial to instead use a class with typed methods for 
 get/set of Scorer, Weights, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2058 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2058/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
init query failed: 
{main(facet=true&facet.pivot=%7B%21stats%3Dst0%7Dpivot_tf1&facet.pivot=%7B%21stats%3Dst2%7Dpivot_tf1%2Cdense_pivot_ti%2Cdense_pivot_y_s1&facet.limit=17&facet.offset=7&facet.overrequest.count=3),extra(rows=0&q=*%3A*&fq=id%3A%5B*+TO+516%5D)}:
 No live SolrServers available to handle this 
request:[http://127.0.0.1:52009/collection1, 
http://127.0.0.1:51970/collection1, http://127.0.0.1:52002/collection1, 
http://127.0.0.1:52005/collection1]

Stack Trace:
java.lang.RuntimeException: init query failed: 
{main(facet=true&facet.pivot=%7B%21stats%3Dst0%7Dpivot_tf1&facet.pivot=%7B%21stats%3Dst2%7Dpivot_tf1%2Cdense_pivot_ti%2Cdense_pivot_y_s1&facet.limit=17&facet.offset=7&facet.overrequest.count=3),extra(rows=0&q=*%3A*&fq=id%3A%5B*+TO+516%5D)}:
 No live SolrServers available to handle this 
request:[http://127.0.0.1:52009/collection1, 
http://127.0.0.1:51970/collection1, http://127.0.0.1:52002/collection1, 
http://127.0.0.1:52005/collection1]
at 
__randomizedtesting.SeedInfo.seed([8340AFE1AF633AAE:B14903B019F5756]:0)
at 
org.apache.solr.cloud.TestCloudPivotFacet.assertPivotCountsAreCorrect(TestCloudPivotFacet.java:255)
at 
org.apache.solr.cloud.TestCloudPivotFacet.test(TestCloudPivotFacet.java:228)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Issue Comment Deleted] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-7319:
---
Comment: was deleted

(was: bq. tools like jstat will no longer function

Sounds problematic, no?)

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.
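
For reference, the workaround the linked blog post recommends (and the one the jstat
trade-off discussed in this thread refers to) is the HotSpot flag
-XX:+PerfDisableSharedMem, which stops the JVM from mmap-ing its hsperfdata
perf-counter file into the temp directory; the cost is that jps/jstat can no longer
see the process. A sketch of how it might be wired into bin/solr.in.sh (the GC_TUNE
variable is what the 5.x start scripts use, but verify against your version):

```shell
# Sketch: disable the mmap'd hsperfdata perf-counter file to avoid the
# GC pauses described above. Side effect: jstat/jps lose visibility
# into this JVM.
GC_TUNE="$GC_TUNE -XX:+PerfDisableSharedMem"
```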






[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384472#comment-14384472
 ] 

Otis Gospodnetic commented on SOLR-7319:


bq. tools like jstat will no longer function

Sounds problematic, no?







[jira] [Resolved] (SOLR-7134) Replication can still cause index corruption.

2015-03-27 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-7134.
---
Resolution: Fixed

 Replication can still cause index corruption.
 -

 Key: SOLR-7134
 URL: https://issues.apache.org/jira/browse/SOLR-7134
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: Trunk, 5.1

 Attachments: SOLR-7134.patch, SOLR-7134.patch, SOLR-7134.patch, 
 SOLR-7134.patch, SOLR-7134.patch


 While we have plugged most of these holes, there appears to be another that 
 is fairly rare.
 I've seen it play out a couple ways in tests, but it looks like part of the 
 problem is that even if we decide we need a file and download it, we don't 
 care if we then cannot move it into place if it already exists.
 I'm working with a fix that does two things:
 * Fail a replication attempt if we cannot move a file into place because it 
 already exists.
 * If a replication attempt during recovery fails, on the next attempt force a 
 full replication to a new directory.






[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384462#comment-14384462
 ] 

Timothy Potter commented on SOLR-6816:
--

I think you misunderstood my point. I wasn't talking about retrying documents 
in the same UpdateRequest. If a Map/Reduce task fails, the HDFS block is 
retried entirely, meaning a Hadoop-based indexing job may send the same docs 
that have already been added, so using overwrite=false is dangerous when doing 
this type of bulk indexing. The solution proposed in SOLR-3382 would be great 
to have as well, though.

I'm working on implementing the version bucket initialization approach that 
Yonik suggested but I'm wondering if we can do better with the hand-off from 
leader to replica? For instance, the leader knows if a doc didn't exist 
(because it's doing a similar ID lookup), so why can't the leader simply share 
that information with the replica, thus allowing the replica to avoid looking 
up a doc that doesn't exist. I'll get the version bucket initialization done 
and then see if this is still a concern, but just bugs me that we're wasting 
all this CPU on looking for docs the leader already knows don't exist.

 Review SolrCloud Indexing Performance.
 --

 Key: SOLR-6816
 URL: https://issues.apache.org/jira/browse/SOLR-6816
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Priority: Critical
 Attachments: SolrBench.pdf


 We have never really focused on indexing performance, just correctness and 
 low hanging fruit. We need to vet the performance and try to address any 
 holes.
 Note: A common report is that adding any replication is very slow.






[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384479#comment-14384479
 ] 

Yonik Seeley commented on SOLR-6816:


bq. For instance, the leader knows if a doc didn't exist (because it's doing a 
similar ID lookup) [...]

I think the leader doesn't need to do this lookup since it is the one 
establishing the order?  Although there is other stuff like peer sync and 
recovery that complicate the picture.
Anyway, the primary reordering issue is between the leader and replica, so even 
if the leader says "hey, previous doc exists for id:1234", it could be 
reordered and hence be false by the time it gets to the replica.

Longer term, I'd like to see reordering eliminated.  The primary mechanism 
would be via a single communication channel between the leader and its 
replicas that would enforce ordering.







[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384519#comment-14384519
 ] 

Timothy Potter commented on SOLR-6816:
--

Seems like the leader is doing a lookup for the existing doc in 
DistributedUpdateProcessor#versionAdd:

{code}
boolean updated = getUpdatedDocument(cmd, versionOnUpdate);
{code}

Anyway, it seemed like I was treading on dangerous ground going down that path. 
Part of this re-ordering / mixing deletes / updates, etc. is why I liked the 
bulkAdd parameter ... I want all this version checking safety when I need it, 
but if I'm pushing in hundreds of thousands of docs per second (e.g. logs), I 
don't want any of that slowing me down unnecessarily. But I'll hold off on that 
until I've measured the improvement of initializing the version buckets 
correctly. Thanks for your continued support on this!








[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384536#comment-14384536
 ] 

Shawn Heisey commented on SOLR-7236:


bq. I am trying to understand why Solr wants to move away from being a web-app 
running in a servlet-container. A servlet-container can deal with all of the 
things you mention in a standardized way

Because as long as the network and HTTP are handled by software that is outside 
of Solr, Solr has absolutely no ability to control it.  Ideally, you should be 
able to drop a configuration right in a handler definition (such as the one for 
/select) found in solrconfig.xml, listing security credentials 
(username/password, IP address, perhaps even certificate information) that you 
are willing to accept for that handler, along with exceptions or credentials 
that will allow SolrCloud inter-node communication to work.

Bringing the servlet container under our control as we did with 5.0 (with 
initial work in 4.10) allows us to tell people how to configure the servlet 
container for security in a predictable manner, but it is still not *Solr* and 
its configuration that's controlling it.
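
To make the idea concrete, here is a purely hypothetical sketch of what a
per-handler security declaration in solrconfig.xml might look like. No such
`<security>` element (or `allow` attributes) exists in any Solr release; the
names below are invented for illustration only:

```xml
<!-- HYPOTHETICAL sketch only: illustrating handler-level credentials,
     not a real Solr configuration element -->
<requestHandler name="/select" class="solr.SearchHandler">
  <security>
    <allow user="searchUser" />       <!-- credentials defined elsewhere -->
    <allow ip="192.168.0.0/16" />     <!-- e.g. trusted clients -->
    <allow role="solr-internode" />   <!-- SolrCloud inter-node traffic -->
  </security>
</requestHandler>
```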

 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here 
 should discuss real user needs and high-level strategy, before deciding on 
 implementation details. All work will be done in sub tasks and linked issues.
 Solr has not traditionally concerned itself with security, and it has been a 
 general view among the committers that it may be better to stay out of it to 
 avoid blood on our hands in this minefield. Still, Solr has lately seen 
 SSL support, securing of ZK, and signing of jars, and discussions have begun 
 about securing operations in Solr.
 Some of the topics to address are
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no user's needs are equal)
 * And we could go on and on but this is what we've seen the most demand for






[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384276#comment-14384276
 ] 

Michael McCandless commented on LUCENE-6308:


bq. I've seen people use large posInc gaps between fields. This can make huge 
position numbers. Also if someone forgets clearAttributes the positions grow 
exponentially. Sure it's bad, but for small docs I bet plenty of people have 
HUGE positions and don't realize it.

I think such examples are really abuse cases?  We shouldn't design
for abuse cases...

Also such users (jumping by enormous position increments each time)
are unlikely to precisely hit Integer.MAX_VALUE ... they are more
likely to overflow it.

What I find compelling about Integer.MAX_VALUE is it makes priority
queues that are merge-sorting N position iterators work naturally,
so they can simply compare by position, and only once all iterators
are on a position must they check whether that position is
Integer.MAX_VALUE.  But if we use -2, then every time we .nextPosition
each iterator we must check if it's ended.

I do agree we should fix IW to detect this during indexing, and
CheckIndex to detect it.

I also like the consistency with NO_MORE_DOCS.
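
The priority-queue argument above can be illustrated with a small,
self-contained sketch (toy classes, not Lucene's actual Spans/PostingsEnum
API): with Integer.MAX_VALUE as the exhausted-iterator sentinel, the
merge-sorting queue compares raw positions only, and a single sentinel check
at the top of the queue replaces a per-.nextPosition "has it ended?" test.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class PositionMergeSketch {
    // Toy position iterator: positions in increasing order, then
    // Integer.MAX_VALUE forever once exhausted.
    static final class PosIter {
        private final int[] positions;
        private int idx = -1;
        int current = -1;
        PosIter(int... positions) { this.positions = positions; }
        int nextPosition() {
            idx++;
            current = idx < positions.length ? positions[idx] : Integer.MAX_VALUE;
            return current;
        }
    }

    // Merge all positions from the given iterators in sorted order.
    // The queue orders by 'current' alone; an exhausted iterator simply
    // sinks to the bottom at Integer.MAX_VALUE, so the only end-of-stream
    // check is the one sentinel comparison at the queue's top.
    static List<Integer> merge(PosIter... iters) {
        PriorityQueue<PosIter> pq =
            new PriorityQueue<>((a, b) -> Integer.compare(a.current, b.current));
        for (PosIter it : iters) { it.nextPosition(); pq.add(it); }
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty() && pq.peek().current != Integer.MAX_VALUE) {
            PosIter top = pq.poll();
            out.add(top.current);
            top.nextPosition();   // no per-iterator "ended?" bookkeeping needed
            pq.add(top);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(new PosIter(1, 4, 9), new PosIter(2, 4), new PosIter(7)));
    }
}
```

With a sentinel of -2 instead, the comparator and the loop would both have to
special-case exhausted iterators on every advance, which is exactly the
overhead the comment argues against.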


 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration






[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384267#comment-14384267
 ] 

Paul Elschot commented on LUCENE-6308:
--

bq. The patch just needs a minor fix to NearSpans' TwoPhaseIterator now
that the approximation is passed to the ctor.

The git mirrors are a little behind again, see INFRA-9182.








[jira] [Comment Edited] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384267#comment-14384267
 ] 

Paul Elschot edited comment on LUCENE-6308 at 3/27/15 6:09 PM:
---

bq. The patch just needs a minor fix to NearSpans' TwoPhaseIterator now that 
the approximation is passed to the ctor.

The git mirrors are a little behind again, see INFRA-9182.



was (Author: paul.elsc...@xs4all.nl):
bq. The patch just needs a minor fix to NearSpans' TwoPhaseIterator now
that the approximation is passed to the ctor.

The git mirrors are a little behind again, see INFRA-9182.








[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12116 - Failure!

2015-03-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12116/
Java: 32bit/jdk1.8.0_60-ea-b06 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([C3E9BE50B4865000:591DC3B22A1CCC3C]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:794)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int 
name="QTime">1</int></lst><result name="response" numFound="0" 
start="0"></result>
</response>

request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at 

[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384405#comment-14384405
 ] 

Michael McCandless commented on LUCENE-6339:


I think the tie-break should be a.doc < b.doc, for consistency with Lucene?

I.e., on a score tie, the smaller doc ID should sort earlier than the bigger 
doc ID?

Otherwise +1 to commit!  Thanks [~areek]!

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, 
 LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
 PostingsFormat completionPostingsFormat = new 
 Completion50PostingsFormat();
 @Override
 public PostingsFormat getPostingsFormatForField(String field) {
   if (isSuggestField(field)) {
 return completionPostingsFormat;
   }
   return super.getPostingsFormatForField(field);
 }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an NRT reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, 
 queryAnalyzer);
   
   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects, does not score
 void suggest(String field, CharSequence key, int num, Filter filter, 
 TopSuggestDocsCollector collector) 
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used instead to wrap another analyzer to tune 
 suggest-field-only parameters. 
 {code:java}
 CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
 preservePositionIncrements, int maxGraphExpansions)
 {code}






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2844 - Still Failing

2015-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2844/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:56434/c8n_1x3_commits_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:56434/c8n_1x3_commits_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([888997ABE0D7069C:DDA8714E2B6B64]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6308) SpansEnum, deprecate Spans

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384283#comment-14384283
 ] 

Michael McCandless commented on LUCENE-6308:


bq. From the current TermSpans.java:

Hmm so if your index has a position = Integer.MAX_VALUE, things already won't 
work for SpanTermQuery... I wonder whether PhraseQuery works today when 
position is Integer.MAX_VALUE.


 SpansEnum, deprecate Spans
 --

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Updated] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-27 Thread Erick Erickson
Talk about arcane! Good find!

On Fri, Mar 27, 2015 at 9:48 AM, Shawn Heisey (JIRA) j...@apache.org wrote:

  [ 
 https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
  ]

 Shawn Heisey updated SOLR-7319:
 ---
 Attachment: SOLR-7319.patch

 New patch, this time against trunk, resolving a merge conflict on CHANGES.txt 
 with another commit.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in 
 the temp directory using MMAP.  If there is a lot of other MMAP write 
 activity, which is precisely how Lucene accomplishes indexing and merging, 
 it can result in a GC pause because the mmap write to the temp file is 
 delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.
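 For reference, the change itself is a single JVM flag; a sketch of what the start-script edit could look like (the -XX:+PerfDisableSharedMem flag is the one named in the blog post; the GC_TUNE variable name is an assumption about how the start scripts assemble JVM options):

```shell
# Disable the mmap'ed hsperfdata statistics file, the source of the GC pauses.
# Trade-off: jps/jstat can no longer discover or monitor this JVM by default.
GC_TUNE="$GC_TUNE -XX:+PerfDisableSharedMem"
```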








[jira] [Commented] (SOLR-7290) Change schemaless _text and copyField

2015-03-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384359#comment-14384359
 ] 

Erick Erickson commented on SOLR-7290:
--

Personally I'd slightly prefer _text_ to just text, although I can argue 
either way. _text_ is clearly a reserved field, OTOH text is something we 
already use as the df.

_text is a distant third IMO since it's too easy to confuse with a dynamic 
field.

FWIW,
Erick

 Change schemaless _text and copyField
 -

 Key: SOLR-7290
 URL: https://issues.apache.org/jira/browse/SOLR-7290
 Project: Solr
  Issue Type: Bug
Reporter: Mike Murphy
 Fix For: 5.1


 schemaless configs should remove copyField to _text
 or at least change _text name.
 http://markmail.org/message/v6djadk5azx6k4gv
 This default led to bad indexing performance.






[jira] [Commented] (LUCENE-6374) remove unused BitUtil code

2015-03-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384407#comment-14384407
 ] 

Michael McCandless commented on LUCENE-6374:


+1

 remove unused BitUtil code
 --

 Key: LUCENE-6374
 URL: https://issues.apache.org/jira/browse/LUCENE-6374
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6374.patch


 There is some dead unused and untested code here.. we should nuke it.






[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384583#comment-14384583
 ] 

Uwe Schindler commented on LUCENE-6339:
---

I just reviewed the patch, too. I like the API, but have not yet looked into it 
closely like Mike - I just skimmed it.

Just one question: What happens if 2 documents have the same SuggestField and 
same suggestion presented to user? This would now produce duplicates, right? I 
was just thinking about how to prevent that (coming from Elasticsearch world).
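If duplicates do need handling on the caller's side for now, one option is to keep only the best-weighted hit per suggestion text. A minimal sketch under that assumption (class and method names are invented for illustration, not API from the patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical client-side de-duplication: when two documents produce the
// same suggestion text, keep only the entry with the higher weight.
public class SuggestDedup {
    public static Map<String, Long> dedupe(String[] keys, long[] weights) {
        Map<String, Long> best = new LinkedHashMap<>();
        for (int i = 0; i < keys.length; i++) {
            Long seen = best.get(keys[i]);
            if (seen == null || weights[i] > seen) {
                best.put(keys[i], weights[i]); // duplicate text: keep the better weight
            }
        }
        return best;
    }
}
```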

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, 
 LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
     PostingsFormat completionPostingsFormat = new Completion50PostingsFormat();

     @Override
     public PostingsFormat getPostingsFormatForField(String field) {
       if (isSuggestField(field)) {
         return completionPostingsFormat;
       }
       return super.getPostingsFormatForField(field);
     }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an nrt reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects does not score
 void suggest(String field, CharSequence key, int num, Filter filter, 
 TopSuggestDocsCollector collector) 
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used instead to wrap another analyzer to tune 
 suggest field only parameters. 
 {code:java}
 CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
 preservePositionIncrements, int maxGraphExpansions)
 {code}






[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384604#comment-14384604
 ] 

Uwe Schindler commented on LUCENE-6339:
---

+1

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, 
 LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A SuggestField can be assigned a numeric weight to be used to score the 
 suggestion at query time.
 Document suggestion can be done on an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time. The suggester 
 can filter out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
   // hook up custom postings format
   // indexAnalyzer for SuggestField
   Analyzer analyzer = ...
   IndexWriterConfig config = new IndexWriterConfig(analyzer);
   Codec codec = new Lucene50Codec() {
     PostingsFormat completionPostingsFormat = new Completion50PostingsFormat();

     @Override
     public PostingsFormat getPostingsFormatForField(String field) {
       if (isSuggestField(field)) {
         return completionPostingsFormat;
       }
       return super.getPostingsFormatForField(field);
     }
   };
   config.setCodec(codec);
   IndexWriter writer = new IndexWriter(dir, config);
   // index some documents with suggestions
   Document doc = new Document();
   doc.add(new SuggestField("suggest_title", "title1", 2));
   doc.add(new SuggestField("suggest_name", "name1", 3));
   writer.addDocument(doc);
   ...
   // open an nrt reader for the directory
   DirectoryReader reader = DirectoryReader.open(writer, false);
   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
   // queryAnalyzer will be used to analyze the query string
   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

   // suggest 10 documents for "titl" on the "suggest_title" field
   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 Index analyzer set through *IndexWriterConfig*
 {code:java}
 SuggestField(String name, String value, long weight) 
 {code}
 h4. Query
 Query analyzer set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight 
 {code:java}
 // full options for TopSuggestDocs (TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options for Collector
 // note: only collects does not score
 void suggest(String field, CharSequence key, int num, Filter filter, 
 TopSuggestDocsCollector collector) 
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used instead to wrap another analyzer to tune 
 suggest field only parameters. 
 {code:java}
 CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
 preservePositionIncrements, int maxGraphExpansions)
 {code}






[jira] [Commented] (SOLR-7291) ChaosMonkey should create more mayhem with ZK availability

2015-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384647#comment-14384647
 ] 

ASF subversion and git services commented on SOLR-7291:
---

Commit 1669687 from [~andyetitmoves] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1669687 ]

SOLR-7291: Test indexing on ZK disconnect with ChaosMonkey tests

 ChaosMonkey should create more mayhem with ZK availability
 --

 Key: SOLR-7291
 URL: https://issues.apache.org/jira/browse/SOLR-7291
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Assignee: Ramkumar Aiyengar
Priority: Minor
 Attachments: SOLR-7291.patch, SOLR-7291.patch


 Some things ChaosMonkey and its users can do:
  - It should stop, pause and restart ZK once in a while
  - Tests using CM should check with indexing stalling when this happens






[jira] [Commented] (SOLR-7202) Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION Collection API actions

2015-03-27 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384682#comment-14384682
 ] 

Shalin Shekhar Mangar commented on SOLR-7202:
-

+1 LGTM

 Remove deprecated DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
 Collection API actions
 -

 Key: SOLR-7202
 URL: https://issues.apache.org/jira/browse/SOLR-7202
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: SOLR-7202.patch


 I think we can remove DELETECOLLECTION, CREATECOLLECTION, RELOADCOLLECTION 
 action types.
 It was marked as deprecated but didn't get removed in 5.0
 While doing a quick check I saw that we can remove Overseer.REMOVECOLLECTION 
 and Overseer.REMOVESHARD
 Any reason why it should be a bad idea?






[jira] [Commented] (SOLR-7305) BlendedInfixLookupFactory swallows root ioexception when it occurs

2015-03-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384616#comment-14384616
 ] 

ASF GitHub Bot commented on SOLR-7305:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/137


 BlendedInfixLookupFactory swallows root ioexception when it occurs
 --

 Key: SOLR-7305
 URL: https://issues.apache.org/jira/browse/SOLR-7305
 Project: Solr
  Issue Type: Bug
  Components: Suggester
Affects Versions: 4.10.4, Trunk
Reporter: Stephan Lagraulet
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.1


 See BlendedInfixLookupFactory.java line 105:
 {code:java}
 try {
   return new BlendedInfixSuggester(core.getSolrConfig().luceneMatchVersion,
       FSDirectory.open(new File(indexPath)),
       indexAnalyzer, queryAnalyzer, minPrefixChars,
       blenderType, numFactor);
 } catch (IOException e) {
   throw new RuntimeException();
 }
 {code}
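The usual fix for this pattern is to chain the caught exception as the cause, so the root IOException (and its stack trace) survives. A small self-contained illustration (names are illustrative, not the actual patch):

```java
import java.io.IOException;

// Demonstrates preserving the root cause instead of swallowing it.
public class CauseDemo {
    static void open() {
        try {
            // Stand-in for the FSDirectory.open(...) call that can fail.
            throw new IOException("suggester index unreadable");
        } catch (IOException e) {
            // Fixed version: new RuntimeException(e) keeps the IOException
            // attached as the cause; the reported bug threw RuntimeException()
            // with no argument, discarding it.
            throw new RuntimeException(e);
        }
    }

    static boolean causePreserved() {
        try {
            open();
            return false;
        } catch (RuntimeException ex) {
            return ex.getCause() instanceof IOException;
        }
    }
}
```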






[GitHub] lucene-solr pull request: Fix SOLR-7305

2015-03-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/137


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2845 - Still Failing

2015-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2845/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:37664/c8n_1x3_commits_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:37664/c8n_1x3_commits_shard1_replica2
at __randomizedtesting.SeedInfo.seed([3C00720EB53D0BA0:B4544DD41BC16658]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7275) Pluggable authorization module in Solr

2015-03-27 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7275:
---
Description: 
Solr needs an interface that makes it easy for different authorization systems 
to be plugged into it. Here's what I plan on doing:

Define an interface {{SolrAuthorizationPlugin}} with one single method 
{{isAuthorized}}. This would take in a {{SolrRequestContext}} object and return 
an {{SolrAuthorizationResponse}} object. The object as of now would only 
contain a single boolean value but in the future could contain more information 
e.g. ACL for document filtering etc.

The reason why we need a context object is so that the plugin doesn't need to 
understand Solr's capabilities e.g. how to extract the name of the collection 
or other information from the incoming request as there are multiple ways to 
specify the target collection for a request. Similarly request type can be 
specified by {{qt}} or {{/handler_name}}.

Flow:
Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.

{code}
public interface SolrAuthorizationPlugin {
  public SolrAuthorizationResponse isAuthorized();
}
{code}

{code}
public  class SolrRequestContext {
  UserInfo; // Will contain user context from the authentication layer.
  HTTPRequest request;
  Enum OperationType; // Correlated with user roles.
  String[] CollectionsAccessed;
  String[] FieldsAccessed;
  String Resource;
}

{code}

{code}
public class SolrAuthorizationResponse {
  boolean authorized;

  public boolean isAuthorized();
}
{code}
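To sanity-check the proposed shape, here is a trivial allow-all implementation against stub versions of the sketched types (the context parameter on isAuthorized is assumed from the prose above, since the interface snippet shows a no-arg method; all class bodies here are placeholders, not the actual proposal):

```java
// Stub versions of the proposed types, just enough to compile a plugin against.
interface SolrAuthorizationPlugin {
    SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
}

class SolrRequestContext { /* user info, request, collections accessed, ... */ }

class SolrAuthorizationResponse {
    private final boolean authorized;
    SolrAuthorizationResponse(boolean authorized) { this.authorized = authorized; }
    public boolean isAuthorized() { return authorized; }
}

// An allow-all plugin: a plausible default when no security system is configured.
public class AllowAllAuthorizationPlugin implements SolrAuthorizationPlugin {
    public SolrAuthorizationResponse isAuthorized(SolrRequestContext context) {
        return new SolrAuthorizationResponse(true);
    }
}
```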

User Roles: 
* Admin
* Collection Level:
  * Query
  * Update
  * Admin

Using this framework, an implementation could be written for specific security 
systems e.g. Apache Ranger or Sentry. It would keep all the security system 
specific code out of Solr.

  was:
Solr needs an interface that makes it easy for different authorization systems 
to be plugged into it. Here's what I plan on doing:

Define an interface {{SolrAuthorizationPlugin}} with one single method 
{{isAuthorized}}. This would take in a {{SolrRequestContext}} object and return 
an {{SolrAuthorizationResponse}} object. The object as of now would only 
contain a single boolean value but in the future could contain more information 
e.g. ACL for document filtering etc.

The reason why we need a context object is so that the plugin doesn't need to 
understand Solr's capabilities e.g. how to extract the name of the collection 
or other information from the incoming request as there are multiple ways to 
specify the target collection for a request. Similarly request type can be 
specified by {{qt}} or {{/handler_name}}.

Flow:
Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.

{code}
public interface SolrAuthorizationPlugin {
  public AuthResponse isAuthorized();
}
{code}

{code}
public  class SolrRequestContext {
  UserInfo; // Will contain user context from the authentication layer.
  HTTPRequest request;
  Enum OperationType; // Correlated with user roles.
  String[] CollectionsAccessed;
  String[] FieldsAccessed;
  String Resource;
}

{code}

{code}
public class SolrAuthorizationResponse {
  boolean authorized;

  public boolean isAuthorized();
}
{code}

User Roles: 
* Admin
* Collection Level:
  * Query
  * Update
  * Admin

Using this framework, an implementation could be written for specific security 
systems e.g. Apache Ranger or Sentry. It would keep all the security system 
specific code out of Solr.


 Pluggable authorization module in Solr
 --

 Key: SOLR-7275
 URL: https://issues.apache.org/jira/browse/SOLR-7275
 Project: Solr
  Issue Type: Sub-task
Reporter: Anshum Gupta
Assignee: Anshum Gupta

 Solr needs an interface that makes it easy for different authorization 
 systems to be plugged into it. Here's what I plan on doing:
 Define an interface {{SolrAuthorizationPlugin}} with one single method 
 {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
 return an {{SolrAuthorizationResponse}} object. The object as of now would 
 only contain a single boolean value but in the future could contain more 
 information e.g. ACL for document filtering etc.
 The reason why we need a context object is so that the plugin doesn't need to 
 understand Solr's capabilities e.g. how to extract the name of the collection 
 or other information from the incoming request as there are multiple ways to 
 specify the target collection for a request. Similarly request type can be 
 specified by {{qt}} or {{/handler_name}}.
 Flow:
 Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
 {code}
 public interface SolrAuthorizationPlugin {
   public SolrAuthorizationResponse isAuthorized();
 }
 {code}
 {code}
 public  class SolrRequestContext {
   UserInfo; // Will contain user context from the 

[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384600#comment-14384600
 ] 

Yonik Seeley commented on SOLR-6816:


bq.boolean updated = getUpdatedDocument(cmd, versionOnUpdate);

That's for atomic updates... and the version passed is the version that was on 
the update itself, not the version of the last doc in the index.

It will be interesting to see if the code works w/o any tweaks... it's been 
sitting there since the beginning of solrcloud, for years unused and untested 
;-)
{code}
// if we aren't the leader, then we need to check that updates were not re-ordered
if (bucketVersion != 0 && bucketVersion < versionOnUpdate) {
  // we're OK... this update has a version higher than anything we've seen
  // in this bucket so far, so we know that no reordering has yet occurred.
  bucket.updateHighest(versionOnUpdate);
{code}
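Paraphrased, the check treats each version bucket as a high-water mark: an update is in order iff its version exceeds everything the bucket has seen. A minimal standalone sketch of that invariant (class and method names invented for illustration, not Solr's):

```java
// Hypothetical model of the version-bucket reordering check.
public class VersionBucketSketch {
    private long highest; // 0 means "nothing seen yet in this bucket"

    public synchronized boolean observeInOrder(long versionOnUpdate) {
        if (highest != 0 && highest >= versionOnUpdate) {
            return false; // reordered: an update with a higher version already passed
        }
        highest = versionOnUpdate; // new high-water mark for the bucket
        return true;
    }
}
```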

 Review SolrCloud Indexing Performance.
 --

 Key: SOLR-6816
 URL: https://issues.apache.org/jira/browse/SOLR-6816
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Priority: Critical
 Attachments: SolrBench.pdf


 We have never really focused on indexing performance, just correctness and 
 low hanging fruit. We need to vet the performance and try to address any 
 holes.
 Note: A common report is that adding any replication is very slow.






[jira] [Updated] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6339:
-
Attachment: LUCENE-6339.patch

Updated Patch:
 - SuggestScoreDocPQ prefers smaller doc id
 - documentation fixes

I will commit this shortly, Thanks for all the feedback [~mikemccand]  
[~simonw]

 [suggest] Near real time Document Suggester
 ---

 Key: LUCENE-6339
 URL: https://issues.apache.org/jira/browse/LUCENE-6339
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/search
Affects Versions: 5.0
Reporter: Areek Zillur
Assignee: Areek Zillur
 Fix For: 5.0

 Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, 
 LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch


 The idea is to index documents with one or more *SuggestField*(s) and be able 
 to suggest documents with a *SuggestField* value that matches a given key.
 A *SuggestField* can be assigned a numeric weight, used to score the 
 suggestion at query time.
 Document suggestions are made against an indexed *SuggestField*. The document 
 suggester can filter out deleted documents in near real-time, and can filter 
 out documents based on a Filter (note: may change to a non-scoring 
 query?) at query time.
 A custom postings format (CompletionPostingsFormat) is used to index 
 SuggestField(s) and perform document suggestions.
 h4. Usage
 {code:java}
  // hook up the custom postings format
  // indexAnalyzer is used for SuggestField values at index time
  Analyzer analyzer = ...
  IndexWriterConfig config = new IndexWriterConfig(analyzer);
  Codec codec = new Lucene50Codec() {
    PostingsFormat completionPostingsFormat = new Completion50PostingsFormat();

    @Override
    public PostingsFormat getPostingsFormatForField(String field) {
      if (isSuggestField(field)) {
        return completionPostingsFormat;
      }
      return super.getPostingsFormatForField(field);
    }
  };
  config.setCodec(codec);
  IndexWriter writer = new IndexWriter(dir, config);

  // index some documents with suggestions
  Document doc = new Document();
  doc.add(new SuggestField("suggest_title", "title1", 2));
  doc.add(new SuggestField("suggest_name", "name1", 3));
  writer.addDocument(doc);
  ...

  // open an NRT reader for the writer
  DirectoryReader reader = DirectoryReader.open(writer, false);
  // SuggestIndexSearcher is a thin wrapper over IndexSearcher;
  // queryAnalyzer is used to analyze the query string
  SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, queryAnalyzer);

  // suggest 10 documents for "titl" on the "suggest_title" field
  TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
 {code}
 h4. Indexing
 The index analyzer is set through *IndexWriterConfig*:
 {code:java}
 SuggestField(String name, String value, long weight)
 {code}
 h4. Query
 The query analyzer is set through *SuggestIndexSearcher*.
 Hits are collected in descending order of the suggestion's weight:
 {code:java}
 // full options, returning TopSuggestDocs (a TopDocs)
 TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
 // full options with a Collector
 // note: only collects, does not score
 void suggest(String field, CharSequence key, int num, Filter filter,
     TopSuggestDocsCollector collector)
 {code}
 h4. Analyzer
 *CompletionAnalyzer* can be used to wrap another analyzer and tune 
 suggest-field-only parameters:
 {code:java}
 CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
 preservePositionIncrements, int maxGraphExpansions)
 {code}






[jira] [Commented] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-27 Thread Areek Zillur (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384631#comment-14384631
 ] 

Areek Zillur commented on LUCENE-6339:
--

Hi [~thetaphi],
Thanks for the review!
If two documents have the same suggestion for the same SuggestField, the 
result will contain duplicate suggestion keys; but because they come from two 
different documents (different doc ids), they are not considered duplicates.
We could add a boolean flag to the NRTSuggester to collect only unique 
suggestions, but then we would have to decide which suggestion to throw out, 
since suggestions are now tied to doc ids.
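To make the trade-off concrete, here is one possible "keep unique suggestions" policy, as a hypothetical standalone sketch (the Hit class, dedup helper, and tie-breaking rule are illustrative assumptions, not part of the patch or the Lucene API): per suggestion key, keep only the highest-weight hit, breaking weight ties toward the smaller doc id.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupSuggestions {
    // A collected suggestion: which document it came from, its key, its weight.
    static final class Hit {
        final int docId;
        final String key;
        final long weight;
        Hit(int docId, String key, long weight) {
            this.docId = docId;
            this.key = key;
            this.weight = weight;
        }
    }

    // Keep one hit per key: highest weight wins; on equal weight, smaller doc id wins.
    static List<Hit> dedup(List<Hit> hits) {
        Map<String, Hit> best = new LinkedHashMap<>();
        for (Hit h : hits) {
            Hit cur = best.get(h.key);
            if (cur == null
                    || h.weight > cur.weight
                    || (h.weight == cur.weight && h.docId < cur.docId)) {
                best.put(h.key, h);
            }
        }
        return new ArrayList<>(best.values());
    }

    public static void main(String[] args) {
        List<Hit> hits = Arrays.asList(
                new Hit(7, "title", 2),   // duplicate key "title" from doc 7
                new Hit(3, "title", 2),   // same key, same weight, smaller doc id
                new Hit(5, "name", 9));
        for (Hit h : dedup(hits)) {
            System.out.println(h.docId + " " + h.key + " " + h.weight);
        }
    }
}
```

Any such flag has to pick a policy like this one, since the FST entries are tied to doc ids and there is no single obvious winner among duplicates.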







