[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11163 - Still Failing!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11163/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerStatusTest.testDistribSearch

Error Message:
reload the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: reload the collection time out:180s
    at __randomizedtesting.SeedInfo.seed([86879FDE28975A5E:76111C65FC83A62]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.invokeCollectionApi(AbstractFullDistribZkTestBase.java:1775)
    at org.apache.solr.cloud.OverseerStatusTest.doTest(OverseerStatusTest.java:103)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.ut

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1216: POMs out of sync

2014-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1216/

3 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=16828, name=Thread-3321, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
    at java.net.URL.openStream(URL.java:1037)
    at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=16828, name=Thread-3321, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
    at java.net.URL.openStream(URL.java:1037)
    at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)
    at __randomizedtesting.SeedInfo.seed([5598A23A5D63E154]:0)


FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=16828, name=Thread-3321, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
    at java.net.URL.openStream(URL.java:1037)
    at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=16828, name=Thread-3321, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClien

[jira] [Created] (SOLR-6551) ConcurrentModificationException in UpdateLog

2014-09-22 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6551:


 Summary: ConcurrentModificationException in UpdateLog
 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul


{code}
null:java.util.ConcurrentModificationException
   at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
   at java.util.LinkedList$ListItr.next(Unknown Source)
   at org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
   at org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
   at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
{code}
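
For context: the trace shows {{UpdateLog.getTotalLogsSize()}} iterating a {{LinkedList}} while another thread mutates it. A minimal sketch of that failure mode (the class and field names here are illustrative, not the actual UpdateLog internals):

{code}
import java.util.LinkedList;
import java.util.List;

public class ComodificationSketch {
  // Unsynchronized LinkedList shared between threads (a hypothetical
  // stand-in for UpdateLog's internal list of tlogs).
  private final List<Long> logSizes = new LinkedList<>();

  // Reader side, e.g. the stats/MBean call in the trace above.
  public long totalLogsSize() {
    long total = 0;
    for (long size : logSizes) { // ListItr.checkForComodification() can fail here
      total += size;
    }
    return total;
  }

  // Writer side, e.g. a concurrent update/commit adding a tlog.
  public void addLog(long size) {
    logSizes.add(size);
  }
}
{code}

Iterating under the same lock that guards the mutations (or snapshotting the list first) would avoid the exception.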






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 629 - Failure

2014-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/629/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
    at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
    at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMar

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_67) - Build # 11162 - Failure!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11162/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:43886, http://127.0.0.1:50542, http://127.0.0.1:59418]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:43886, http://127.0.0.1:50542, http://127.0.0.1:59418]
    at __randomizedtesting.SeedInfo.seed([42F30BF3C4F4D937:C31585EBB3ABB90B]:0)
    at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
    at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
    at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired

Re: continuing gc when too many search threads

2014-09-22 Thread Li Li
I don't think it's a problem of a small heap. I have run it on a
machine with 24 cores and 64GB of memory, using the default JVM
arguments. I found that it uses about 10GB of heap space and still causes
this problem. The total index is less than 3GB on disk.
I don't know why Lucene creates so many ConcurrentHashMap$HashEntry and
IdentityWeakReference instances; it seems Lucene wants to cache something
using WeakIdentityMap.

On Mon, Sep 22, 2014 at 9:17 PM, Shawn Heisey  wrote:
> On 9/22/2014 6:42 AM, Li Li wrote:
>>   I have an index of about 30 million short strings; the index size is
>> about 3GB on disk.
>>  I have given the JVM 5GB of memory with the default settings on Ubuntu
>> 12.04 with Sun JDK 7.
>>  When I use 20 threads it's OK, but if I run 30 threads, after a
>> while the JVM is doing nothing but GC.
>
> When your JVM is doing constant GC, your heap isn't big enough to do the
> job it's being asked to do by the software.  Your options are to
> increase your heap size, or change how your program works so that it
> needs less heap.
>
> Right now, if you have the memory available, simply make your heap
> larger.  I'm not really sure how to reduce heap requirements for a
> custom Lucene program, although if any of these notes for Solr (which is
> of course a Lucene program) can be helpful to you, here they are:
>
> http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>
> Thanks,
> Shawn
>
>



[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-09-22 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144161#comment-14144161 ]

Hoss Man commented on SOLR-6351:



I'm headed out of town for a week, but before I go -- while it's fresh in my 
head -- I wanted to post some notes on what the next steps for this need to be, 
based on the current state of trunk (after the refactoring & cleanup done in 
recent issues like SOLR-6354 & SOLR-6507).

{panel:title=next steps}
*Single Node Pivot Tests...*

# apparently we've managed to go this far without any simple single-node pivot 
tests other than {{SolrExampleTests}} -- which requires solrj support.  Since it 
would be nice to start with some simple proof that single-node pivots+stats 
work, we need to start with some tests
# we should add a simple {{FacetPivotSmallTest}} that uses the same basic data 
and assertions as {{DistributedFacetPivotSmallTest}}, but with a single Solr 
node and using xpath (instead of depending on solrj).

*Local Pivots + stats...*

# add some logic & a getter to {{StatsField}} to make public a list of the 
"tags" in its local params
# add a {{Map<String,List<StatsField>>}} to {{StatsInfo}} to support a new 
method for looking up the {{List<StatsField>}} corresponding to a given tag 
string.
# Modify the pivot facet code to check for a "stats" local param (see the 
example request after this panel):
#* the value of which may be a comma-separated list of "tags" to look up with 
the {{StatsInfo}} instance of the current {{ResponseBuilder}} to get the 
{{StatsField}} instances we want to hang off of our pivots.
#* if there are some {{StatsFields}} to hang off of our pivot, then any code in 
{{PivotFacetProcessor}} which currently calls {{getSubsetSize()}} should call 
{{getSubset()}}; and after *any* call (existing or new) to {{getSubset()}} the 
code should (in addition to adding the set size to the response) pass that 
DocSet to {{StatsField.computeLocalStatsValues}} and include the resulting 
StatsValues in the response.
# update the previously created {{FacetPivotSmallTest}} to also test hanging 
some stats off of pivots

*SolrJ*

# update the SolrJ {{PivotField}} to support having a {{List<FieldStatsInfo>}} 
in it
# update the solrj codecs to know how to populate those if/when the data exists 
in the response
# add some unit tests for this in solrj (no existing unit tests of the pivot or 
stats object creation from responses???)
# update {{SolrExampleTests}} to do some pivots+stats and verify that they can 
be parsed correctly by solrj

*Distributed Pivot + Stats*

# {{PivotFacetValue}} needs to know if/when it should have one or more 
{{StatsValues}} in it, and get an empty instance from {{StatsValuesFactory}} for 
each of the applicable {{StatsField}} instances.
# {{PivotFacetValue.createFromNamedLists}} needs to recognize when a shard is 
including a sub-NamedList of stats data, and merge each of those 
children into the appropriate {{StatsValues.accumulate(NamedList)}} (based on 
{{StatsField.getKey()}})
# at this point we should be able to update {{DistributedFacetPivotSmallTest}} 
to include the same types of pivot+stats additions that were made to 
{{FacetPivotSmallTest}} for checking the single-node case, and see distributed 
pivot+stats working.

*Test, Test, Test*

# at this point we should be able to update the other distributed pivot tests 
with pivot + stats cases to make sure we don't find new bugs
# adding in stats params & assertions to {{DistributedFacetPivotLargeTest}} and 
{{DistributedFacetPivotLongTailTest}} should be straightforward
# {{TestCloudPivotFacet}} will be more interesting due to the randomization...
#* adding new randomized {{stats.field}} params is trivial given all the 
interesting fields already included in the docs
#* with a little record keeping of what {{stats.field}} params we add, we can 
easily tweak the {{facet.pivot}} params to include a {{stats=...}} local param 
to ask for them
#* we'll want a trace param to know if/when to expect stats in the response 
(so we don't overlook bugs where stats are never computed/returned)
#* in {{assertPivotCountsAreCorrect}}, if stats are expected, then instead of a 
simple {{assertNumFound}} on each of the pivot values, we can actually assert 
that when filtering on that pivot value, the _stats_ we get back for the entire 
(filtered) request match the stats we got back as part of the pivot (ie: don't 
just check the {{pivot\[count\]==numFound}}, also check 
{{pivot\[stats\]\[foo\]\[min\]==stats\[foo\]\[min\]}} and 
{{pivot\[stats\]\[foo\]\[max\]==stats\[foo\]\[max\]}}, etc...)
#** the merge logic should be exact for the min/max/missing/count stats, but we 
may need some leniency here for the comparisons of some of the computed stats 
values like sum/mean/stddev/etc... since the order of operations involved in 
the merge may cause intermediate precision loss
{panel}
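
To make the plan concrete, the request shape it implies would look something 
like this (a sketch only -- the field names are illustrative, and the exact 
spelling just follows the local-param conventions described above):

{noformat}
stats.field={!tag=piv1 min=true max=true}price
facet.pivot={!stats=piv1}cat,inStock
{noformat}

i.e. the {{tag}} on the {{stats.field}} is what the pivot's {{stats}} local 
param looks up, and the resulting min/max of {{price}} would be computed for 
every {{cat,inStock}} pivot bucket.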

One final note...

bq. we need to think carefully about "exclusions" because they might be different

My

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144144#comment-14144144 ]

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626922 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626922 ]

LUCENE-5969: move bumping default codec/dv/pf in tests to TestUtil methods, put 
blocktreeords in rotation

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144129#comment-14144129 ]

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626921 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626921 ]

LUCENE-5969: move bumping default codec/dv/pf in tests to TestUtil methods, put 
blocktreeords in rotation

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Resolved] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Ryan Ernst (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Ernst resolved LUCENE-5972.

   Resolution: Fixed
Fix Version/s: 5.0
 Assignee: Ryan Ernst

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0
>
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144106#comment-14144106 ]

ASF subversion and git services commented on LUCENE-5972:
-

Commit 1626920 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626920 ]

LUCENE-5972: IndexFormatTooOldException and IndexFormatTooNewException now 
extend from IOException (merged 1626914)

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Commented] (SOLR-6541) Enhancement for SOLR-6452 StatsComponent "missing" stat won't work with docValues=true and indexed=false

2014-09-22 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144067#comment-14144067 ]

Hoss Man commented on SOLR-6541:



Setting aside points #2 & #3 for a moment (since they seem to be purely about 
API visibility)...

bq. 1. Accumulate methods should not return stats specific numbers (it is 
generic). ...

I'm not sure that I understand this sentence -- it seems to be referring to 
DocValuesStats.accumSingle and DocValuesStats.accumMulti, and the fact that 
they return the number of docs "missing" values.  But I don't understand:

a) in what way are you saying these methods are "generic"? ... they aren't 
part of any API contract, they're just package-private static methods.

b) how is the missing count a "stats specific number"? ... at first glance, I 
thought you meant it was being accumulated in the AbstractStatsValues base class 
(or code tightly coupled with it) when really it should be associated with code 
in a specific subclass (similar to how subclasses like "NumericStatsValues" 
add stats like "sum" and "mean" that don't make sense for string & enum 
fields) -- but even if that's what you meant, it doesn't make sense given that 
"missing" is part of the AbstractStatsValues contract.

bq. 4. We don't need intermediate maps to accumulate missing counts. ...

At a glance this seems like a good idea to me -- but it seems so straightforward 
that it makes me suspicious about why it wasn't done that way in the first 
place.

Looking at the comments in SOLR-6452, this seems to have been very deliberate...

bq. As an optimization, why don't you make the missingStats a Map and use the 
ords as keys instead of the terms. That way you don't need to do the lookupOrd 
for all docs, and you do it only once per term in the accumulateMissing() method.


> Enhancement for SOLR-6452 StatsComponent "missing" stat won't work with 
> docValues=true and indexed=false
> 
>
> Key: SOLR-6541
> URL: https://issues.apache.org/jira/browse/SOLR-6541
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.10, Trunk
>Reporter: Vitaliy Zhovtyuk
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6541.patch
>
>
> This issue is a refactoring of the solution provided in SOLR-6452 (StatsComponent 
> "missing" stat won't work with docValues=true and indexed=false).
> I think the following points need to be addressed:
> 1. Accumulate methods should not return stats-specific numbers (they are 
> generic). Attached a solution with a container class. Also made them 
> private-scoped.
> Returning just the missing count from the accumulate methods does not allow you 
> to extend them with additional count fields; therefore I propose to leave them void.
> 2. Reduced visibility of fields in FieldFacetStats.
> 3. Methods FieldFacetStats#accumulateMissing and 
> FieldFacetStats#accumulateTermNum do not throw any IOException.
> 4. We don't need intermediate maps to accumulate missing counts. Method 
> org.apache.solr.handler.component.FieldFacetStats#facetMissingNum 
> can be changed to work directly on the StatsValues structure, and 
> org.apache.solr.handler.component.FieldFacetStats#accumulateMissing can be 
> removed. We don't need to have it in 2 phases.






[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143979#comment-14143979 ]

ASF subversion and git services commented on LUCENE-5972:
-

Commit 1626914 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1626914 ]

LUCENE-5972: IndexFormatTooOldException and IndexFormatTooNewException now 
extend from IOException

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143971#comment-14143971 ]

Robert Muir commented on LUCENE-5972:
-

Looks great, thank you!

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Updated] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Ryan Ernst (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Ernst updated LUCENE-5972:
---
Attachment: LUCENE-5972.patch

Good ideas.  This patch removes the asserts, and calls Objects.toString on the 
message in CorruptIndexException.

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






How to openIfChanged the most recent merge?

2014-09-22 Thread Mikhail Khludnev
Hello!
I'm in trouble with the Lucene IndexWriter. I'm benchmarking an algorithm
that might look like an NRT case, but I'm not sure I need NRT in
particular. The overall problem is writing a "join index" (a column that
holds docnums) by updating binary docvalues after commit, i.e.:
 - update docs
 - commit
 - read docs (openIfChanged() before)
 - updateDocVals
 - commit

It's clunky but it works, until guess what happens... a merge. Oh my.

At one point I have these segments:
segments_ec:2090 _7c(5.0):C117/8:delGen=8:
_7j(5.0):C1:fieldInfosGen=1:dvGen=1 _7k(5.0):C1)

I apply one update and trigger a commit; as a result I have:
segments_ee:2102 _7c(5.0):C117/9:delGen=9:..
_7k(5.0):C1:fieldInfosGen=1:dvGen=1 _7l(5.0):C1)

However, somewhere inside this commit call, SerialMergeScheduler
bakes the single solid segment
_7m(5.0):C117
which hasn't been exposed via any segments file so far.

And now I get into trouble:
if I call DR.openIfChanged(segments_ec) (even after IW.waitMerges()), I
get segments_ee. That's fairly reasonable, to keep it incremental and fast.
But if I use that IndexWriter, it applies new updates on top of the
merged segment (_7m(5.0):C117), not on segments_ee, and that broke my
assertions. What I need is to open a reader on that merged _7m(5.0):C117,
which IW keeps somewhere internally, ideally in an incremental way. If you
can point me at how NRT can solve this, I'd be happy to switch to it.
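
As far as I understand it, the NRT route would be to open the reader from
the IndexWriter itself, so the reader sees exactly the segments (including
freshly merged ones) that the writer will apply the next updates to.
Something like this sketch (Lucene 4.x/5.x API, untested):

    DirectoryReader reader = DirectoryReader.open(writer, true); // true: apply deletes
    // ... writer.updateDocument(...) / writer.updateBinaryDocValue(...) ...
    DirectoryReader newReader = DirectoryReader.openIfChanged(reader, writer, true);
    if (newReader != null) { // null means nothing has changed
      reader.close();
      reader = newReader;
    }

Is that the right direction?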

Thank you so much for your time!!!

-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143857#comment-14143857 ]

Robert Muir commented on LUCENE-5972:
-

Looks good, can we remove this assert in both classes?

{code}
assert resourceDesc != null;
{code}

Such things do not belong in exceptions. Maybe while we are here, we can fix 
CorruptIndexException, to not throw NPE if its 'message' is null.

Currently it has:
{code}
super(message + " (resource=" + resourceDescription + ")", cause);
{code}

This can just be super(Objects.toString(message) + ...
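
That is, the whole line would presumably become:

{code}
super(Objects.toString(message) + " (resource=" + resourceDescription + ")", cause);
{code}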

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Resolved] (SOLR-6354) Support stats over functions

2014-09-22 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man resolved SOLR-6354.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0
 Assignee: Hoss Man

> Support stats over functions
> 
>
> Key: SOLR-6354
> URL: https://issues.apache.org/jira/browse/SOLR-6354
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6354.patch, SOLR-6354.patch, SOLR-6354.patch, 
> TstStatsComponent.java
>
>
> The majority of the logic in StatsValuesFactory for dealing with stats over 
> fields just uses the ValueSource API.  There's very little reason we can't 
> generalize this to support computing aggregate stats over any arbitrary 
> function (or the scores from an arbitrary query).
> Example...
> {noformat}
> stats.field={!func key=mean_rating 
> mean=true}prod(user_rating,pow(editor_rating,2))
> {noformat}
> ...would mean that we can compute a conceptual "rating" for each doc by 
> multiplying the user_rating field by the square of the editor_rating field, 
> and then we'd compute the mean of that "rating" across all docs in the set 
> and return it as "mean_rating"






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-09-22 Thread Simon Willnauer (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143856#comment-14143856 ]

Simon Willnauer commented on LUCENE-5911:
-

+1 to freeze

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Updated] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Ryan Ernst (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Ernst updated LUCENE-5972:
---
Attachment: LUCENE-5972.patch

Ok, new patch, using {{Objects.toString}}, and adding back ctors taking 
{{DataInput}}

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch, LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Commented] (SOLR-6249) Schema API changes return success before all cores are updated

2014-09-22 Thread Timothy Potter (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143846#comment-14143846 ]

Timothy Potter commented on SOLR-6249:
--

This mechanism is mainly a convenience for the client to not have to poll the 
zk version from all replicas themselves. If a timeout occurs, then either A) one 
or more of the replicas couldn't process the update, or B) one or more of the 
replicas was just being really slow. If A, then the client app can't really 
proceed safely without resolving the root cause. I'm not sure what to do about 
this without going down the path of having a distributed transaction that 
allows us to roll back updates if any replicas fail.

If B, the client can wait longer, but then they would have to poll all the 
replicas themselves, which makes clients implement this same polling solution 
on their side as well (roughly the sketch below) -- not very convenient.

One thing we could do to make it more convenient for dealing with B, and 
possibly even A, is to use the solution I proposed for SOLR-6550 to pass back 
the URLs of the replicas that timed out using the extended exception metadata. 
That at least narrows the scope for the client, but it's still inconvenient.

Alternatively, async would work, but at some point doesn't the client have to 
give up polling? Hence we're back to effectively having a timeout. I took this 
ticket to mean that a client doesn't want to proceed with more updates until it 
knows all cores have seen the current update, so async seems to just move the 
problem out to the client. 

I'm happy to implement the async approach, but from where I sit now, I think we 
should build distributed two-phase-commit transaction support into managed 
schema, as it will be useful going forward for managed config. That way, clients 
can make a change and then be certain it was either applied entirely or not at 
all, and their cluster remains in a consistent state. This of course would only 
apply to schema and config changes, so I'm not talking about distributed 
transactions for Solr in general.
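
For illustration, the client-side polling that B would otherwise force looks 
roughly like this (a sketch only -- {{SchemaVersionSource}} is a hypothetical 
stand-in for whatever call asks one replica for the zk version of its currently 
loaded schema):

{code}
interface SchemaVersionSource {
  int zkVersion(String replicaUrl);
}

static boolean allReplicasCaughtUp(List<String> replicaUrls, SchemaVersionSource source,
                                   int expectedZkVersion, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (System.currentTimeMillis() < deadline) {
    boolean allCurrent = true;
    for (String url : replicaUrls) {
      if (source.zkVersion(url) < expectedZkVersion) {
        allCurrent = false;
        break;
      }
    }
    if (allCurrent) {
      return true;
    }
    Thread.sleep(250); // back off between polls
  }
  return false; // timed out: case A or case B above, and the caller can't tell which
}
{code}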

> Schema API changes return success before all cores are updated
> --
>
> Key: SOLR-6249
> URL: https://issues.apache.org/jira/browse/SOLR-6249
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Timothy Potter
> Attachments: SOLR-6249.patch, SOLR-6249.patch
>
>
> See SOLR-6137 for more details.
> The basic issue is that Schema API changes return success when the first core 
> is updated, but other cores asynchronously read the updated schema from 
> ZooKeeper.
> So a client application could make a Schema API change and then index some 
> documents based on the new schema that may fail on other nodes.
> Possible fixes:
> 1) Make the Schema API calls synchronous
> 2) Give the client some ability to track the state of the schema.  They can 
> already do this to a certain extent by checking the Schema API on all the 
> replicas and verifying that the field has been added, though this is pretty 
> cumbersome.  Maybe it makes more sense to do this sort of thing on the 
> collection level, i.e. Schema API changes return the zk version to the 
> client.  We add an API to return the current zk version.  On a replica, if 
> the zk version is >= the version the client has, the client knows that 
> replica has at least seen the schema change.  We could also provide an API to 
> do the distribution and checking across the different replicas of the 
> collection so that clients don't need to do that themselves.






[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143811#comment-14143811 ]

Robert Muir commented on LUCENE-5972:
-

Also the existing logic (calling in.toString) within these exceptions 
themselves is bogus today, as is the bogus assert != null.

This stuff needs to be removed. The ctor taking DataInput should just call 
Objects.toString() instead (see CorruptIndexException), so that a null will not 
corrupt this exception with some bogus NPE or AssertionError.

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Commented] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143808#comment-14143808 ]

Robert Muir commented on LUCENE-5972:
-

Why are the constructors taking DataInput removed? This forces calls to 
toString(), which will cause an NPE in some cases (whereas before, it would just 
concatenate "null").

If things go corrupt, who knows what could go wrong; I think it's best to avoid 
this.

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.






[jira] [Resolved] (LUCENE-5962) patch creation tool (rename and improve diffSources.py)

2014-09-22 Thread Ryan Ernst (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Ernst resolved LUCENE-5962.

   Resolution: Fixed
Fix Version/s: 5.0
 Assignee: Ryan Ernst

> patch creation tool (rename and improve diffSources.py)
> ---
>
> Key: LUCENE-5962
> URL: https://issues.apache.org/jira/browse/LUCENE-5962
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0
>
> Attachments: LUCENE-5962.patch
>
>
> The script {{diffSources.py}} is used for creating patches for feature 
> branches.  I think the name {{createPatch.py}} would be more apt.  It also 
> only works with certain file types and is one of the only python scripts 
> written for python 2.
> I'd like to rename this script, upgrade it to python 3, and fix it to work 
> with all files that git/svn would not ignore.






[jira] [Commented] (LUCENE-5962) patch creation tool (rename and improve diffSources.py)

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143790#comment-14143790 ]

ASF subversion and git services commented on LUCENE-5962:
-

Commit 1626891 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626891 ]

LUCENE-5962: Rename diffSources.py to createPatch.py and make it work with all 
text file types (merged 1626890)

> patch creation tool (rename and improve diffSources.py)
> ---
>
> Key: LUCENE-5962
> URL: https://issues.apache.org/jira/browse/LUCENE-5962
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5962.patch
>
>
> The script {{diffSources.py}} is used for creating patches for feature 
> branches.  I think the name {{createPatch.py}} would be more apt.  It also 
> only works with certain file types and is one of the only python scripts 
> written for python 2.
> I'd like to rename this script, upgrade it to python 3, and fix it to work 
> with all files that git/svn would not ignore.






[jira] [Commented] (LUCENE-5962) patch creation tool (rename and improve diffSources.py)

2014-09-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143786#comment-14143786 ]

ASF subversion and git services commented on LUCENE-5962:
-

Commit 1626890 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1626890 ]

LUCENE-5962: Rename diffSources.py to createPatch.py and make it work with 
all text file types

> patch creation tool (rename and improve diffSources.py)
> ---
>
> Key: LUCENE-5962
> URL: https://issues.apache.org/jira/browse/LUCENE-5962
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5962.patch
>
>
> The script {{diffSources.py}} is used for creating patches for feature 
> branches.  I think the name {{createPatch.py}} would be more apt.  It also 
> only works with certain file types and is one of the only python scripts 
> written for python 2.
> I'd like to rename this script, upgrade it to python 3, and fix it to work 
> with all files that git/svn would not ignore.






[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4876 - Still Failing

2014-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4876/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:16704/xg_aah, http://127.0.0.1:16600/xg_aah, http://127.0.0.1:16380/xg_aah, http://127.0.0.1:16493/xg_aah, http://127.0.0.1:16295/xg_aah]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:16704/xg_aah, http://127.0.0.1:16600/xg_aah, http://127.0.0.1:16380/xg_aah, http://127.0.0.1:16493/xg_aah, http://127.0.0.1:16295/xg_aah]
    at __randomizedtesting.SeedInfo.seed([E9D95A2C9664564D:683FD434E13B3671]:0)
    at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
    at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
    at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
    at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapte

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1806 - Still Failing!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1806/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:50107/_g/w, https://127.0.0.1:50113/_g/w, 
https://127.0.0.1:50099/_g/w, https://127.0.0.1:50110/_g/w, 
https://127.0.0.1:50104/_g/w]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:50107/_g/w, 
https://127.0.0.1:50113/_g/w, https://127.0.0.1:50099/_g/w, 
https://127.0.0.1:50110/_g/w, https://127.0.0.1:50104/_g/w]
at 
__randomizedtesting.SeedInfo.seed([57FD2DCD0F2B8E08:D61BA3D57874EE34]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ru

[jira] [Commented] (LUCENE-5931) DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit point has deletes/field updates

2014-09-22 Thread Vitaly Funstein (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143710#comment-14143710
 ] 

Vitaly Funstein commented on LUCENE-5931:
-

Michael,

Your updated patch definitely fixes the issue. But I wanted to understand why 
deletes are special here: if I have no buffered deletes for the segment, only 
new documents, the reused reader instance won't pick them up even without the 
fix in place. Is that because liveDocs won't capture unflushed doc ids?
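
For reference, a minimal sketch of the reopen-from-commit pattern in question; 
the directory path and the assumption that older commits are still retained 
(e.g. via a keep-commits deletion policy) are illustrative, not from the issue:

{code}
// Open a reader on the current index, then reopen it against an older
// commit point, reusing segment state from the existing reader.
Directory dir = FSDirectory.open(new File("/path/to/index"));
DirectoryReader current = DirectoryReader.open(dir);

// Requires that the IndexWriter kept past commits around.
List<IndexCommit> commits = DirectoryReader.listCommits(dir);
IndexCommit older = commits.get(0); // earliest retained commit

// This is the call that hits the assert (or NPE) when the old reader's
// SegmentReader carries deletes but the commit point reports none.
DirectoryReader reopened = DirectoryReader.openIfChanged(current, older);
{code}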

> DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given 
> commit point has deletes/field updates
> -
>
> Key: LUCENE-5931
> URL: https://issues.apache.org/jira/browse/LUCENE-5931
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6.1
>Reporter: Vitaly Funstein
>Assignee: Michael McCandless
>Priority: Critical
> Attachments: CommitReuseTest.java, LUCENE-5931.patch, 
> LUCENE-5931.patch, LUCENE-5931.patch, LUCENE-5931.patch
>
>
> {{StandardDirectoryReader}} assumes that the segments from commit point have 
> deletes, when they may not, yet the original SegmentReader for the segment 
> that we are trying to reuse does. This is evident when running attached JUnit 
> test case with asserts enabled (default): 
> {noformat}
> java.lang.AssertionError
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:188)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
>   at 
> org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
> {noformat}
> or, if asserts are disabled then it falls through into NPE:
> {noformat}
> java.lang.NullPointerException
>   at java.io.File.<init>(File.java:305)
>   at 
> org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
>   at 
> org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:327)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:131)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:194)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
>   at 
> org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-09-22 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143678#comment-14143678
 ] 

Timothy Potter commented on SOLR-6511:
--

I'm ready to commit this solution, but before I do, I'd like some feedback on 
SOLR-6550, which is how I'm implementing Alan's "no need, I'm the leader, I'll 
take it from here, thanks" recommendation.

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.
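
A standalone illustration of the off-by-one, with values chosen to match the 
DistributedUpdateProcessor case described above:

{code}
int maxTries = 1;
int tries = 0;
boolean continueTrying = true;
// Pre-increment makes the very first test 1 < 1, which is false,
// so the body never executes when maxTries is 1.
while (continueTrying && ++tries < maxTries) {
    System.out.println("attempt " + tries); // never reached
}
// With <= the loop runs exactly once, as intended.
{code}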



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6550) Provide simple mechanism for passing additional metadata / context about a server-side SolrException back to the client-side

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6550:
-
Attachment: SOLR-6550.patch

Patch for the approach I described above.

> Provide simple mechanism for passing additional metadata / context about a 
> server-side SolrException back to the client-side
> 
>
> Key: SOLR-6550
> URL: https://issues.apache.org/jira/browse/SOLR-6550
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6550.patch
>
>
> While trying to resolve SOLR-6511, it became apparent that I didn't have a 
> good way to convey more information about a particular error occurring on the 
> server-side using SolrException. The specific situation I encountered is a 
> replica took over as leader, but the previous leader wasn't aware of that yet 
> (due to a Zk session expiration). So when the previous leader (the one that 
> experienced the Zk session expiration) sent an update request with 
> FROMLEADER, the new leader rejected the request with a SolrException. 
> Ideally, we want the new leader to be able to say "you're not the leader 
> anymore" and for the previous leader to fail the request in a specific way; 
> see SOLR-6511 for more background on this scenario.
> My first inclination was to just extend SolrException and throw a 
> LeaderChangedException and have the client behave accordingly but then I 
> discovered that CUSS just takes the status code and error message and 
> reconstructs a new SolrException (on the client side). HttpSolrServer does 
> the same thing when creating a RemoteSolrException. So the fact that the 
> server side threw a LeaderChangedException is basically lost in translation.
> I'm open to other suggestions but here's my approach so far:
> Add a {{NamedList metadata}} field to the SolrException class.
> If a server-side component wants to add additional context / metadata, then 
> it will call: {{solrExc.setMetadata("name", "value");}}
> When the response is being marshaled into the wire format, ResponseUtils will 
> include the metadata if available. On the client side, when the response is 
> processed, the metadata gets included into the new SolrException (in CUSS) or 
> RemoteSolrException (HttpSolrServer). It's up to the client to dig into the 
> metadata to take additional steps as I'll be doing in 
> DistributedUpdateProcessor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6550) Provide simple mechanism for passing additional metadata / context about a server-side SolrException back to the client-side

2014-09-22 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6550:


 Summary: Provide simple mechanism for passing additional metadata 
/ context about a server-side SolrException back to the client-side
 Key: SOLR-6550
 URL: https://issues.apache.org/jira/browse/SOLR-6550
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Timothy Potter
Assignee: Timothy Potter


While trying to resolve SOLR-6511, it became apparent that I didn't have a good 
way to convey more information about a particular error occurring on the 
server-side using SolrException. The specific situation I encountered is a 
replica took over as leader, but the previous leader wasn't aware of that yet 
(due to a Zk session expiration). So when the previous leader (the one that 
experienced the Zk session expiration) sent an update request with FROMLEADER, 
the new leader rejected the request with a SolrException. Ideally, we want the 
new leader to be able to say "you're not the leader anymore" and for the 
previous leader to fail the request in a specific way; see SOLR-6511 for more 
background on this scenario.

My first inclination was to just extend SolrException and throw a 
LeaderChangedException and have the client behave accordingly but then I 
discovered that CUSS just takes the status code and error message and 
reconstructs a new SolrException (on the client side). HttpSolrServer does the 
same thing when creating a RemoteSolrException. So the fact that the 
server side threw a LeaderChangedException is basically lost in translation.

I'm open to other suggestions but here's my approach so far:

Add a {{NamedList metadata}} field to the SolrException class.
If a server-side component wants to add additional context / metadata, then it 
will call: {{solrExc.setMetadata("name", "value");}}

When the response is being marshaled into the wire format, ResponseUtils will 
include the metadata if available. On the client side, when the response is 
processed, the metadata gets included into the new SolrException (in CUSS) or 
RemoteSolrException (HttpSolrServer). It's up to the client to dig into the 
metadata to take additional steps as I'll be doing in 
DistributedUpdateProcessor.
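
A rough sketch of the shape this could take on SolrException; the field and 
accessors shown are illustrative of the approach, not the final patch:

{code}
public class SolrException extends RuntimeException {
  // ... existing constructors and fields ...

  // Optional context a server-side component can attach to the error.
  private NamedList<String> metadata;

  public void setMetadata(String key, String value) {
    if (metadata == null) {
      metadata = new NamedList<String>();
    }
    metadata.add(key, value);
  }

  public String getMetadata(String key) {
    return (metadata == null || key == null) ? null : metadata.get(key);
  }
}
{code}

The client would then inspect the metadata on the reconstructed exception, 
rather than its type, to decide how to fail the request.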



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian closed SOLR-6548.
---
Resolution: Not a Problem

> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allow adding 
> additional plugin type mappings there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed.  Also if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance it seems like the change to enable this could be relatively 
> simple. Adding a new "plugin-types" entry to the Solr configuration xml that 
> has "plugin-type" entries each with a "name" and "class" sub-entry. 
> Then the constructor would read this list from the config  and then call the  
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.
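
A sketch of what the constructor loop described in the last paragraph might 
look like; the XPath expression and helper usage are assumptions for 
illustration, not existing Solr API:

{code}
// Read custom <plugin-type name="..." class="..."/> mappings from
// solrconfig.xml and register each one via the existing loadPluginInfo call.
NodeList nodes = (NodeList) evaluate("plugin-types/plugin-type", XPathConstants.NODESET);
for (int i = 0; i < nodes.getLength(); i++) {
  Node node = nodes.item(i);
  String typeName = DOMUtil.getAttr(node, "name", "plugin-type");
  String typeClass = DOMUtil.getAttr(node, "class", "plugin-type");
  loadPluginInfo(loader.findClass(typeClass, Object.class), typeName,
      REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
}
{code}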



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143668#comment-14143668
 ] 

Brian commented on SOLR-6548:
-

It turns out that what I specifically wanted to do is already possible, though 
it is somewhat complex.  I can declare plugins of a custom type as child nodes 
in the XML definition for a known type (e.g., a request handler), as long as 
the classes implement SolrInfoMBean.  This follows what the highlighting 
component does: it has a number of different "fragmenter" types, each with a 
different class, and these get loaded as child PluginInfo entries of the 
highlighting component's PluginInfo.  During SolrCoreAware inform they can 
then be read and instantiated from the passed-in PluginInfo, e.g. 
info.getChildren("highlighting");
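
A condensed sketch of that pattern; the component and element names are 
illustrative (imports and registration logic elided):

{code}
// Hypothetical component that reads child plugin declarations from its own
// PluginInfo once the core is available.
public class MyQueryGeneratorHolder implements SolrCoreAware, PluginInfoInitialized {

  private PluginInfo info;

  @Override
  public void init(PluginInfo info) {
    this.info = info;
  }

  @Override
  public void inform(SolrCore core) {
    // Instantiate each child entry declared under this component's element,
    // e.g. <queryGenerator name="..." class="..."/> nested in the config.
    for (PluginInfo child : info.getChildren("queryGenerator")) {
      Object generator =
          core.getResourceLoader().newInstance(child.className, Object.class);
      // ... keep generator, keyed by child.name ...
    }
  }
}
{code}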

One downside is that the child plugins couldn't easily be shared between 
different handlers.

I will close this issue; if needed, a new one can always be opened. 

> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allow adding 
> additional plugin type mappings there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed.  Also if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance it seems like the change to enable this could be relatively 
> simple. Adding a new "plugin-types" entry to the Solr configuration xml that 
> has "plugin-type" entries each with a "name" and "class" sub-entry. 
> Then the constructor would read this list from the config  and then call the  
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6354) Support stats over functions

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143660#comment-14143660
 ] 

ASF subversion and git services commented on SOLR-6354:
---

Commit 1626875 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626875 ]

SOLR-6354: stats.field can now be used to generate stats over the numeric 
results of arbitrary functions (merge r1626856)

> Support stats over functions
> 
>
> Key: SOLR-6354
> URL: https://issues.apache.org/jira/browse/SOLR-6354
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6354.patch, SOLR-6354.patch, SOLR-6354.patch, 
> TstStatsComponent.java
>
>
> The majority of the logic in StatsValuesFactory for dealing with stats over 
> fields just uses the ValueSource API.  There's very little reason we can't 
> generalize this to support computing aggregate stats over any arbitrary 
> function (or the scores from an arbitrary query).
> Example...
> {noformat}
> stats.field={!func key=mean_rating 
> mean=true}prod(user_rating,pow(editor_rating,2))
> {noformat}
> ...would mean that we can compute a conceptual "rating" for each doc by 
> multiplying the user_rating field by the square of the editor_rating field, 
> and then we'd compute the mean of that "rating" across all docs in the set 
> and return it as "mean_rating"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 635 - Still Failing

2014-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/635/

1 tests failed.
REGRESSION:  
org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([EB7A704156A4175F:162195CE7F3E0E8]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling(TestRandomSamplingFacetsCollector.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 8590 lines...]
   [junit4] Suite: org.apache.lucene.facet.TestRandomSamplingFacetsCollector
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestRandomSamplingFacetsCollector -Dtests.method=testRandomSampling 
-Dtests.seed=EB7A704156A4175F -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene

[jira] [Updated] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5972:
---
Attachment: LUCENE-5972.patch

Patch.

> Index too old/new is not a corruption
> -
>
> Key: LUCENE-5972
> URL: https://issues.apache.org/jira/browse/LUCENE-5972
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5972.patch
>
>
> {{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
> from {{CorruptIndexException}}.  But this is not a corruption, it is simply 
> an unsupported version of an index.  They should just extend {{IOException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-09-22 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6545:
---
Attachment: SOLR-6545.patch

Uploaded a patch based on lucene_solr_4_10.

Ran through all tests; the new change doesn't break any other test.


> Query field list with wild card on dynamic field fails
> --
>
> Key: SOLR-6545
> URL: https://issues.apache.org/jira/browse/SOLR-6545
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
> Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
>Reporter: Burke Webster
>Priority: Critical
> Attachments: SOLR-6545.patch
>
>
> Downloaded 4.10.0, unpacked, and setup a solrcloud 2-node cluster by running: 
>   bin/solr -e cloud 
> Accept all the default options and you will have a 2-node cloud running 
> with a replication factor of 2.  
> Now add 2 documents by going to example/exampledocs, creating the following 
> file named my_test.xml:
> <add>
>  <doc>
>   <field name="id">1000</field>
>   <field name="name">test 1</field>
>   <field name="description">Text about test 1.</field>
>   <field name="cat_a_s">Category A</field>
>  </doc>
>  <doc>
>   <field name="id">1001</field>
>   <field name="name">test 2</field>
>   <field name="description">Stuff about test 2.</field>
>   <field name="cat_b_s">Category B</field>
>  </doc>
> </add>
> Then import these documents by running:
>   java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
> my_test.xml
> Verify the docs are there by hitting:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*
> Now run a query and ask for only the id and cat_*_s fields:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*
> You will only get the id fields back.  Change the query a little to include a 
> third field:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*
> You will now get the following exception:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.ecl

[jira] [Created] (LUCENE-5972) Index too old/new is not a corruption

2014-09-22 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5972:
--

 Summary: Index too old/new is not a corruption
 Key: LUCENE-5972
 URL: https://issues.apache.org/jira/browse/LUCENE-5972
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst


{{IndexFormatTooOldException}} and {{IndexFormatTooNewException}} both extend 
from {{CorruptIndexException}}.  But this is not a corruption, it is simply an 
unsupported version of an index.  They should just extend {{IOException}}.
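
The change itself is small; a sketch of the proposed hierarchy (the 
constructor shown is illustrative, not the attached patch):

{code}
import java.io.IOException;

// Proposed: extend IOException directly instead of CorruptIndexException,
// since the bytes on disk are fine; only the version is unsupported.
public class IndexFormatTooOldException extends IOException {
  public IndexFormatTooOldException(String resourceDesc, int version,
                                    int minVersion, int maxVersion) {
    super("Format version is not supported (resource: " + resourceDesc + "): "
        + version + " (needs to be between " + minVersion + " and "
        + maxVersion + ")");
  }
}
{code}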



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-09-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143635#comment-14143635
 ] 

Michael McCandless commented on LUCENE-5569:


+1

> Rename AtomicReader to LeafReader
> -
>
> Key: LUCENE-5569
> URL: https://issues.apache.org/jira/browse/LUCENE-5569
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: Trunk
>
> Attachments: LUCENE-5569.patch, LUCENE-5569.patch
>
>
> See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
> {{Atomic}}.
> Talking from my experience, I was a bit confused in the beginning that this 
> thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
> in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
> remove this confusion and also carry the information that these readers are 
> used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6028) SOLR returns 500 error code for query /<,/

2014-09-22 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6028:
---
Attachment: SOLR-6028.patch

Added a catch and transformation to SolrException (BadRequest), which leads to 
HTTP 400. One thing still looks off, though: 
org.apache.lucene.search.RegexpQuery throws IllegalArgumentException on parse 
problems; shouldn't that be a custom runtime exception?
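
A condensed sketch of that transformation (the surrounding parser code is 
elided):

{code}
try {
  // RegexpQuery's automaton parsing throws IllegalArgumentException on a
  // malformed pattern such as "<,".
  return new RegexpQuery(new Term(field, termStr));
} catch (IllegalArgumentException e) {
  // Surface malformed input as a client error (HTTP 400) instead of a 500.
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e.getMessage(), e);
}
{code}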

> SOLR returns 500 error code for query /<,/
> --
>
> Key: SOLR-6028
> URL: https://issues.apache.org/jira/browse/SOLR-6028
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.7.1
>Reporter: Kingston Duffie
>Priority: Minor
> Attachments: SOLR-6028.patch
>
>
> If you enter the following query string into the SOLR admin console to 
> execute a query, you will get a 500 error:
> /<,/
> This is an invalid query -- in the sense that the field between the slashes 
> is not a valid regex.  Nevertheless, I would have expected to get a 400 error 
> rather than 500.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143619#comment-14143619
 ] 

Noble Paul commented on SOLR-6307:
--

I still feel #2 is the right way to do it

> Atomic update remove does not work for int array or date array
> --
>
> Key: SOLR-6307
> URL: https://issues.apache.org/jira/browse/SOLR-6307
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.9
>Reporter: Kun Xi
>  Labels: atomic, difficulty-medium, impact-medium
>
> Try to remove an element in the string array with curl:
> {code}
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
> [1960]},  "id": 1098}]'
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
> ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
> "2014-02-21T12:00:00Z"]}, "id": 1098}]'
> {code}
> Neither of them works.
> The set and add operations for the int array work.
> The set, remove, and add operations for the string array work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1844 - Failure!

2014-09-22 Thread Robert Muir
Thank you

On Mon, Sep 22, 2014 at 1:19 PM, Michael McCandless
 wrote:
> I'll fix ... need to suppress SimpleText...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Sep 22, 2014 at 1:07 PM, Policeman Jenkins Server
>  wrote:
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1844/
>> Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>>
>> 15 tests failed.
>> REGRESSION:  org.apache.lucene.search.TestSortedSetSelector.testMiddleMax
>>
>> Error Message:
>> codec does not support random access ordinals, cannot use selector: 
>> MIDDLE_MAX
>>
>> Stack Trace:
>> java.lang.UnsupportedOperationException: codec does not support random 
>> access ordinals, cannot use selector: MIDDLE_MAX
>> at 
>> __randomizedtesting.SeedInfo.seed([7104C11261397844:4A10894851F62D02]:0)
>> at 
>> org.apache.lucene.search.SortedSetSelector.wrap(SortedSetSelector.java:80)
>> at 
>> org.apache.lucene.search.SortedSetSortField$1.getSortedDocValues(SortedSetSortField.java:129)
>> at 
>> org.apache.lucene.search.FieldComparator$TermOrdValComparator.setNextReader(FieldComparator.java:828)
>> at 
>> org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.doSetNextReader(TopFieldCollector.java:97)
>> at 
>> org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:605)
>> at 
>> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:573)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:525)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:502)
>> at 
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:370)
>> at 
>> org.apache.lucene.search.TestSortedSetSelector.testMiddleMax(TestSortedSetSelector.java:384)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>> at 
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>> at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at 
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>> at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
>> at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
>> at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>> at 
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>> at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>> at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>> at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOn

RE: Help with `ant beast`

2014-09-22 Thread Uwe Schindler
I am fine with any solution!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
> Sent: Monday, September 22, 2014 6:16 PM
> To: dev@lucene.apache.org
> Subject: RE: Help with `ant beast`
> 
> 
> : I would prefer to change the parent modules to have an explicit target. See
> my other message.
> 
> you're missing the point -- you can do both.
> 
> Add an explicit top level beast target w/whatever message you think is best.
> but in general, if you've got a target that has preconditions, and you don't 
> like
> the message that target fails with when the preconditions aren't met, then
> add an explicit check of those preconditions and fail with a better message.
> 
> yes junit.classpath is an implementation detail, but it's a detail of the same
> target where you can add this <fail/> ... whatever the implementation details
> are, you can add the same check.
> 
> that way you are protecting hte target (and it's error messages) from any
> possible overside in forgetting to add an explicit "override" with a helpful
> error, or if the build.xml files are refactored, or if the directories are 
> moved
> arround, etc...
> 
> :
> : -
> : Uwe Schindler
> : H.-H.-Meier-Allee 63, D-28213 Bremen
> : http://www.thetaphi.de
> : eMail: u...@thetaphi.de
> :
> :
> : > -Original Message-
> : > From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
> : > Sent: Friday, September 19, 2014 6:56 PM
> : > To: dev@lucene.apache.org
> : > Subject: RE: Help with `ant beast`
> : >
> : > :
> : > : This is correct! Maybe we can improve the error message, but this is not
> : > : so easy... What is the best way to detect if a build file is a parent
> : > : one? Of course we could add a dummy target to all parent build files -
> : > : "ant test" has this to delegate to subant builds, but we don’t want to
> : > : do this here.
> : >
> : > can we just add something like this inside the "beast" target?...
> : >
> : >
> : > <fail message="beast must be run from a module with tests">
> : >   <condition>
> : >     <not><isreference refid="junit.classpath"/></not>
> : >   </condition>
> : > </fail>
> : >
> : >
> : >
> : > :
> : > : Uwe
> : > :
> : > : -
> : > : Uwe Schindler
> : > : H.-H.-Meier-Allee 63, D-28213 Bremen
> : > : http://www.thetaphi.de
> : > : eMail: u...@thetaphi.de
> : > :
> : > :
> : > : > -Original Message-
> : > : > From: Steve Rowe [mailto:sar...@gmail.com]
> : > : > Sent: Friday, September 19, 2014 5:54 PM
> : > : > To: dev@lucene.apache.org
> : > : > Subject: Re: Help with `ant beast`
> : > : >
> : > : > I think ‘ant beast’ only works in the directory of the module 
> containing
> the
> : > : > test, not at a higher level.
> : > : >
> : > : > On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar
> : > : >  wrote:
> : > : >
> : > : > > I am trying to use `ant beast` on trunk (per the recommendation in
> test-
> : > : > help) and getting this error:
> : > : > >
> : > : > > ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
> : > : > > -Dtestcase=TestBytesStore
> : > : > >
> : > : > > -beast:
> : > : > >   [beaster] Beast round: 1
> : > : > >
> : > : > > BUILD FAILED
> : > : > > ~/lucene-solr/lucene/common-build.xml:1363: The following error
> : > : > occurred while executing this line:
> : > : > > ~/lucene-solr/lucene/common-build.xml:1358: The following error
> : > : > occurred while executing this line:
> : > : > > ~/lucene-solr/lucene/common-build.xml:961: Reference
> junit.classpath
> : > : > not found.
> : > : > >
> : > : > > `ant test` works just fine. Any idea where the problem might be?
> : > : >
> : > : >
> : > : > -
> : > : > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
> : > additional
> : > : > commands, e-mail: dev-h...@lucene.apache.org
> : > :
> : > :
> : > : -
> : > : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> : > : For additional commands, e-mail: dev-h...@lucene.apache.org
> : > :
> : > :
> : >
> : > -Hoss
> : > http://www.lucidworks.com/
> :
> :
> : -
> : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> : For additional commands, e-mail: dev-h...@lucene.apache.org
> :
> :
> 
> -Hoss
> http://www.lucidworks.com/


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-09-22 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5569:
---
Attachment: LUCENE-5569.patch

Here is a fresh patch against current trunk.  I think we should get this in for 
5.0 since branch_5x has been created now.

> Rename AtomicReader to LeafReader
> -
>
> Key: LUCENE-5569
> URL: https://issues.apache.org/jira/browse/LUCENE-5569
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: Trunk
>
> Attachments: LUCENE-5569.patch, LUCENE-5569.patch
>
>
> See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
> {{Atomic}}.
> Talking from my experience, I was a bit confused in the beginning that this 
> thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
> in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
> remove this confusion and also carry the information that these readers are 
> used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6354) Support stats over functions

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143512#comment-14143512
 ] 

ASF subversion and git services commented on SOLR-6354:
---

Commit 1626856 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1626856 ]

SOLR-6354: stats.field can now be used to generate stats over the numeric 
results of arbitrary functions

> Support stats over functions
> 
>
> Key: SOLR-6354
> URL: https://issues.apache.org/jira/browse/SOLR-6354
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6354.patch, SOLR-6354.patch, SOLR-6354.patch, 
> TstStatsComponent.java
>
>
> The majority of the logic in StatsValuesFactory for dealing with stats over 
> fields just uses the ValueSource API.  There's very little reason we can't 
> generalize this to support computing aggregate stats over any arbitrary 
> function (or the scores from an arbitrary query).
> Example...
> {noformat}
> stats.field={!func key=mean_rating 
> mean=true}prod(user_rating,pow(editor_rating,2))
> {noformat}
> ...would mean that we can compute a conceptual "rating" for each doc by 
> multiplying the user_rating field by the square of the editor_rating field, 
> and then we'd compute the mean of that "rating" across all docs in the set 
> and return it as "mean_rating"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6071) Make solr install like other databases

2014-09-22 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike closed SOLR-6071.
--
   Resolution: Fixed
Fix Version/s: 4.10.1

Looks like some serious progress. Thanks so much for picking this up.

> Make solr install like other databases
> --
>
> Key: SOLR-6071
> URL: https://issues.apache.org/jira/browse/SOLR-6071
> Project: Solr
>  Issue Type: New Feature
>  Components: scripts and tools
>Affects Versions: Trunk
> Environment: Ubuntu
>Reporter: Mike
> Fix For: 4.10.1
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> It's long past time that Solr should have proper startup and log scripts. 
> There are a number of reasons and much evidence why we should start including 
> them:
> 1. In Solr-4792 we removed the war file from the distribution making it 
> easier than ever before to set solr up with these scripts.
> 2. The StackOverflow question on this topic has been viewed more than 34k 
> times and has several differing answers.
> 3. For non-java developers, figuring out the right way to start and daemonize 
> Solr isn't obvious. Right now, my installation has a number of java flags 
> that I've accumulated over the years (-jar means what? -server is only needed 
> on 32 bit machines? -xMX huh?) This leads to varied deployments and 
> inconsistencies that common scripts could help alleviate.
> 4. Anecdotally I've heard endless bashing of Solr because it's such a pain to 
> get set up. 
> 5. Solr is unlike any other database I know in the grittiness of starting it 
> up.
> 6. Not having these scripts makes Solr look less polished than it would 
> otherwise.
> We discussed this on IRC a bit yesterday and there didn't seem to be any 
> opposition to doing this. Consensus seemed to be simply that it hadn't been 
> done...yet.
> I am not an expert on these things, but I think we should get something put 
> together for Solr 5, if there's time. Hopefully this thread can get the ball 
> rolling -- I didn't see any previous discussion anywhere. Apologies if I 
> missed it. 
> This would be a great improvement to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5971) Separate backcompat creation script from adding version

2014-09-22 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5971:
---
Attachment: LUCENE-5971.patch

This patch does the following:
* Removes the CreateBackwardsCompatibilityIndex class, which existed just to keep 
tests from running the index creation methods.  Instead, it uses assumeTrue 
methods, and requires passing -Dtests.bwcdir when creating backcompat indexes 
(see the sketch after this list).
* Renames all backcompat index files to contain the exact toString of their 
version, instead of a squashed/minified variant.  This allows using {{Version}} 
parsing and comparison to check that all versions have a test (and for future 
tests, if needed, as in the past when older codecs did not support certain 
features).
* Adds an {{addBackcompatIndexes.py}} script, which was extracted from 
{{bumpVersion.py}}.
* Tweaks {{TestBackwardsCompatibility.testAllVersionsTested}} to allow a single 
"missing" test, which can happen when we are in the middle of releasing a new 
version (so tests don't break as soon as a new constant is added to {{Version}}).
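
As a rough illustration of the assumeTrue gating described above (the property 
name comes from the patch description; the surrounding test method is 
hypothetical):

{code}
// Inside a LuceneTestCase subclass: skip index creation unless -Dtests.bwcdir is set.
public void testCreateBackcompatIndex() throws Exception {
  String bwcDir = System.getProperty("tests.bwcdir");
  assumeTrue("backcompat index creation requires -Dtests.bwcdir=/path/to/dir",
      bwcDir != null);
  // ... build the backcompat index under bwcDir ...
}
{code}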

> Separate backcompat creation script from adding version
> ---
>
> Key: LUCENE-5971
> URL: https://issues.apache.org/jira/browse/LUCENE-5971
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-5971.patch
>
>
> The recently created {{bumpVersion.py}} attempts to create a new backcompat 
> index if the default codec has changed.  However, we now want to create a 
> backcompat index for every released version, instead of just when there is a 
> change to the default codec.
> We should have a separate script which creates the backcompat indexes.  It 
> can even work directly on the released artifacts (by pulling down from 
> mirrors once released), so that there is no possibility for generating the 
> index from an incorrect svn/git checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-09-22 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated LUCENE-5833:
--
Attachment: LUCENE-5833.patch

Hi [~mikemccand],

Thanks for reviewing. Here is a new patch which hangs on to the StorableField. 
It also folds in your other suggestions.

> Suggestor Version 2 doesn't support multiValued fields
> --
>
> Key: LUCENE-5833
> URL: https://issues.apache.org/jira/browse/LUCENE-5833
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/other
>Affects Versions: 4.8.1
>Reporter: Greg Harris
>Assignee: Steve Rowe
> Attachments: LUCENE-5833.patch, LUCENE-5833.patch, SOLR-6210.patch
>
>
> So if you use a multiValued field in the new suggestor, it will not pick up 
> terms beyond the first one. It treats the first term as the 
> only term it will make its dictionary from. 
> This is the suggestor I'm talking about:
> https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5971) Separate backcompat creation script from adding version

2014-09-22 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5971:
--

 Summary: Separate backcompat creation script from adding version
 Key: LUCENE-5971
 URL: https://issues.apache.org/jira/browse/LUCENE-5971
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst


The recently created {{bumpVersion.py}} attempts to create a new backcompat 
index if the default codec has changed.  However, we now want to create a 
backcompat index for every released version, instead of just when there is a 
change to the default codec.

We should have a separate script which creates the backcompat indexes.  It can 
even work directly on the released artifacts (by pulling down from mirrors once 
released), so that there is no possibility for generating the index from an 
incorrect svn/git checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6484) SolrCLI's healthcheck action needs to check live nodes as part of reporting the status of a replica

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6484:
-
Component/s: (was: SolrCloud)
 scripts and tools

> SolrCLI's healthcheck action needs to check live nodes as part of reporting 
> the status of a replica
> ---
>
> Key: SOLR-6484
> URL: https://issues.apache.org/jira/browse/SOLR-6484
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6484.patch
>
>
> Same basic issue as SOLR-6481



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6549) bin/solr script should support a -s option to set the -Dsolr.solr.home property

2014-09-22 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6549:


 Summary: bin/solr script should support a -s option to set the 
-Dsolr.solr.home property
 Key: SOLR-6549
 URL: https://issues.apache.org/jira/browse/SOLR-6549
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter


The bin/solr script supports a -d parameter for specifying the directory 
containing the webapp, resources, etc, lib ... In most cases, these binaries 
are reusable (and will eventually live in a server directory, see SOLR-3619) even 
if you want to have multiple solr.solr.home directories on the same server. In 
other words, it is more common (and better) to do:

{code}
bin/solr start -d server -s home1
bin/solr start -d server -s home2
{code}

than to do:

{code}
bin/solr start -d server1
bin/solr start -d server2
{code}

Basically, the start script needs to support a -s option that allows you to 
share binaries but have different Solr home directories for running multiple 
Solr instances on the same host.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6529) Stop command in the start scripts should only stop the instance that it had started

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6529:


Assignee: Timothy Potter

> Stop command in the start scripts should only stop the instance that it had 
> started
> ---
>
> Key: SOLR-6529
> URL: https://issues.apache.org/jira/browse/SOLR-6529
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Timothy Potter
>
> Currently the stop command looks for all running Solr instances and stops 
> those. I feel this is a bit dangerous.
> When starting a process with the start command, we could write out a pid file 
> for the Solr process that it started. Then the stop script should stop that 
> process.
> It could error out if the pid file is not present.
> We could still keep the feature of stopping all Solr nodes by passing -all?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6071) Make solr install like other databases

2014-09-22 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143472#comment-14143472
 ] 

Timothy Potter commented on SOLR-6071:
--

I think I've addressed all these issues in the bin/solr script provided by 
SOLR-3617. Please close out if you agree.

> Make solr install like other databases
> --
>
> Key: SOLR-6071
> URL: https://issues.apache.org/jira/browse/SOLR-6071
> Project: Solr
>  Issue Type: New Feature
>  Components: scripts and tools
>Affects Versions: Trunk
> Environment: Ubuntu
>Reporter: Mike
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> It's long past time that Solr should have proper startup and log scripts. 
> There are a number of reasons and much evidence why we should start including 
> them:
> 1. In SOLR-4792 we removed the war file from the distribution, making it 
> easier than ever before to set Solr up with these scripts.
> 2. The StackOverflow question on this topic has been viewed more than 34k 
> times and has several differing answers.
> 3. For non-Java developers, figuring out the right way to start and daemonize 
> Solr isn't obvious. Right now, my installation has a number of Java flags 
> that I've accumulated over the years (-jar means what? -server is only needed 
> on 32-bit machines? -Xmx huh?) This leads to varied deployments and 
> inconsistencies that common scripts could help alleviate.
> 4. Anecdotally, I've heard endless bashing of Solr because it's such a pain to 
> get set up. 
> 5. Solr is unlike any other database I know in the grittiness of starting it 
> up.
> 6. Not having these scripts makes Solr look less polished than it would 
> otherwise.
> We discussed this on IRC a bit yesterday and there didn't seem to be any 
> opposition to doing this. Consensus seemed to be simply that it hadn't been 
> done...yet.
> I am not an expert on these things, but I think we should get something put 
> together for Solr 5, if there's time. Hopefully this thread can get the ball 
> rolling -- I didn't see any previous discussion anywhere. Apologies if I 
> missed it. 
> This would be a great improvement to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143464#comment-14143464
 ] 

ASF subversion and git services commented on SOLR-6509:
---

Commit 1626849 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626849 ]

SOLR-6509: Solr start scripts interactive mode doesn't honor -z argument

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6509.
--
   Resolution: Fixed
Fix Version/s: 5.0

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143462#comment-14143462
 ] 

ASF subversion and git services commented on SOLR-6509:
---

Commit 1626847 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1626847 ]

SOLR-6509: Solr start scripts interactive mode doesn't honor -z argument

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2127 - Still Failing

2014-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2127/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=8593, 
name=Thread-3068, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]  
   at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=8593, name=Thread-3068, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)
at __randomizedtesting.SeedInfo.seed([11AC42FC98E36A70]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=8593, name=Thread-3068, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=8593, name=Thread-3068, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.

Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1844 - Failure!

2014-09-22 Thread Michael McCandless
I'll fix ... need to suppress SimpleText...

Mike McCandless

http://blog.mikemccandless.com


On Mon, Sep 22, 2014 at 1:07 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1844/
> Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> 15 tests failed.
> REGRESSION:  org.apache.lucene.search.TestSortedSetSelector.testMiddleMax
>
> Error Message:
> codec does not support random access ordinals, cannot use selector: MIDDLE_MAX
>
> Stack Trace:
> java.lang.UnsupportedOperationException: codec does not support random access 
> ordinals, cannot use selector: MIDDLE_MAX
> at 
> __randomizedtesting.SeedInfo.seed([7104C11261397844:4A10894851F62D02]:0)
> at 
> org.apache.lucene.search.SortedSetSelector.wrap(SortedSetSelector.java:80)
> at 
> org.apache.lucene.search.SortedSetSortField$1.getSortedDocValues(SortedSetSortField.java:129)
> at 
> org.apache.lucene.search.FieldComparator$TermOrdValComparator.setNextReader(FieldComparator.java:828)
> at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.doSetNextReader(TopFieldCollector.java:97)
> at 
> org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:605)
> at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:573)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:525)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:502)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:370)
> at 
> org.apache.lucene.search.TestSortedSetSelector.testMiddleMax(TestSortedSetSelector.java:384)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> 

[jira] [Comment Edited] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143437#comment-14143437
 ] 

Brian edited comment on SOLR-6548 at 9/22/14 5:14 PM:
--

A potential alternative that would require more change would be to genericize 
the SolrCore plugin initialization: instead of having a separate registry and 
corresponding look-up method for each type, make it a single registry for all 
types of plugins that need to be instantiated, with a common look-up method.
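
A minimal sketch of what such a generic registry might look like; the 
PluginRegistry class below is hypothetical, not an existing Solr class.

{code}
import java.util.*;
import org.apache.solr.core.PluginInfo;

// Hypothetical: one registry keyed by plugin type, replacing the per-type
// fields and per-type look-up methods.
public class PluginRegistry {
  private final Map<String, List<PluginInfo>> byType = new HashMap<>();

  public void register(String type, PluginInfo info) {
    List<PluginInfo> list = byType.get(type);
    if (list == null) {
      list = new ArrayList<>();
      byType.put(type, list);
    }
    list.add(info);
  }

  // Common look-up method shared by all plugin types.
  public List<PluginInfo> lookup(String type) {
    List<PluginInfo> list = byType.get(type);
    return list == null ? Collections.<PluginInfo>emptyList() : list;
  }
}
{code}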


was (Author: brian44):
A potential alternative that would require more change would be to genericize 
the SolrCore plugin initialization, instead of having it a registry and 
corresponding look-up method for each type, make it so it was a registry for 
all types of plugins that need to be instantiated, and have a common look-up 
method.

> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allowing 
> additional plugin type mappings to be added there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed.  Also, if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance it seems like the change to enable this could be relatively 
> simple: add a new "plugin-types" entry to the Solr configuration xml that 
> has "plugin-type" entries, each with a "name" and "class" sub-entry. 
> Then the constructor would read this list from the config and call the 
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry, as 
> sketched below.
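
A rough sketch of the constructor-side change proposed above; the plugin-types 
element and its parsing are hypothetical, while loadPluginInfo, REQUIRE_NAME, 
REQUIRE_CLASS, and MULTI_OK are the existing pieces quoted in the description.

{code}
// Hypothetical fragment inside the SolrConfig constructor: read
// <plugin-types><plugin-type name="..." class="..."/></plugin-types>
// from solrconfig.xml and register each mapping generically.
NodeList nodes = (NodeList) evaluate("plugin-types/plugin-type", XPathConstants.NODESET);
for (int i = 0; i < nodes.getLength(); i++) {
  Element e = (Element) nodes.item(i);
  String typeName = e.getAttribute("name");
  Class typeClass = loader.findClass(e.getAttribute("class"), Object.class);
  loadPluginInfo(typeClass, typeName, REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
}
{code}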



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143437#comment-14143437
 ] 

Brian commented on SOLR-6548:
-

A potential alternative that would require more change would be to genericize 
the SolrCore plugin initialization: instead of having a separate registry and 
corresponding look-up method for each type, make it a single registry for all 
types of plugins that need to be instantiated, with a common look-up method.

> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allowing 
> additional plugin type mappings to be added there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed.  Also, if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance it seems like the change to enable this could be relatively 
> simple: add a new "plugin-types" entry to the Solr configuration xml that 
> has "plugin-type" entries, each with a "name" and "class" sub-entry. 
> Then the constructor would read this list from the config and call the 
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1844 - Failure!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1844/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

15 tests failed.
REGRESSION:  org.apache.lucene.search.TestSortedSetSelector.testMiddleMax

Error Message:
codec does not support random access ordinals, cannot use selector: MIDDLE_MAX

Stack Trace:
java.lang.UnsupportedOperationException: codec does not support random access 
ordinals, cannot use selector: MIDDLE_MAX
at 
__randomizedtesting.SeedInfo.seed([7104C11261397844:4A10894851F62D02]:0)
at 
org.apache.lucene.search.SortedSetSelector.wrap(SortedSetSelector.java:80)
at 
org.apache.lucene.search.SortedSetSortField$1.getSortedDocValues(SortedSetSortField.java:129)
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.setNextReader(FieldComparator.java:828)
at 
org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.doSetNextReader(TopFieldCollector.java:97)
at 
org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:605)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:573)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:525)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:502)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:370)
at 
org.apache.lucene.search.TestSortedSetSelector.testMiddleMax(TestSortedSetSelector.java:384)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.

[jira] [Comment Edited] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143428#comment-14143428
 ] 

Brian edited comment on SOLR-6548 at 9/22/14 5:05 PM:
--

I guess it may not be as simple as I thought at first glance.

As of right now with only that change, for new custom types, it seems like the 
code for initializing them would have to go in the initialization code of the 
component that uses them.  I.e., it would be along the same lines as the code 
used in SolrCore to initialize the different components, e.g., "initQParsers()" 
in SolrCore gets all the plugin classes found in the SolrConfig having the 
QParser type and instantiates each.  


Here's one way it could work:
For my custom request handler that makes use of the custom plugin type, in a 
SolrCoreAware inform method, it would get all the plugin classes defined in the 
SolrConfig extending the QueryGenerator plugin type, and based on which are 
specified in its parameters, it would instantiate an object of each 
QueryGenerator class as needed.  So the component that uses the custom plugin 
type could be responsible for instantiating and using them (the code could live 
there), which would still allow adding new types without any code changes to 
that component itself.

I.e., say I wanted to add a new query generator type "topic_based_generator", 
then I would define "topic_based_generator" with a particular QueryGenerator 
class in solrconfig.xml with the class contained in a new jar I added to the 
library directory, like we do for custom query parsers.  The pluginInfo would 
get loaded by the SolrConfig constructor.  Then in the request handler 
definition in solrconfig.xml, I would list that type name 
"topic_based_generator" as one type that handler uses to generate queries.  
Then in the initialization for the handler it would instantiate the correct 
QueryGenerator class that "topic_based_generator" mapped to.

I.e., the handler would check info.name (from PluginInfo info) across the 
QueryGenerator PluginInfos it got from the SolrConfig, find the one matching 
"topic_based_generator", and then call solrCore.createInitInstance for that 
class to get the generator itself, as sketched below.
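
A rough sketch of that inform() flow, assuming a hypothetical QueryGenerator 
plugin type; getPluginInfos and createInitInstance are existing 
SolrConfig/SolrCore methods.

{code}
// Hypothetical SolrCoreAware request handler resolving its query generator.
public void inform(SolrCore core) {
  // All queryGenerator plugin entries loaded by the SolrConfig constructor.
  List<PluginInfo> infos =
      core.getSolrConfig().getPluginInfos(QueryGenerator.class.getName());
  for (PluginInfo info : infos) {
    // Match the type name this handler was configured with.
    if ("topic_based_generator".equals(info.name)) {
      this.generator = core.createInitInstance(
          info, QueryGenerator.class, "QueryGenerator", null);
    }
  }
}
{code}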




was (Author: brian44):
I guess it may not be as simple as I thought at first glance.

As of right now with only that change, for new custom types, it seems like the 
code for initializing them would have to go in the initialization code of the 
component that uses them.  I.e., it would be along the same lines as the code 
used in SolrCore to initialize the different components, e.g., "initQParsers()" 
in SolrCore gets all the plugin classes found in the SolrConfig having the 
QParser type and instantiates each.  


Here's one way it could work:
For my custom request handler that makes use of the custom plugin type, in a 
SolrCoreAware inform method, it would get all the plugin classes defined in the 
SolrConfig extending the QueryGenerator plugin type, and based on which are 
specified in its parameters, it would instantiate an object of each 
QueryGenerator class as needed.  So the component that uses the custom plugin 
type could be responsible for instantiating and using them (the code could live 
there), which would still allow adding new types without any code changes to 
that component itself.

I.e., say I wanted to add a new query generator type "topic_based_generator", 
then I would define "topic_based_generator" with a particular QueryGenerator 
class in solrconfig.xml contained in a new jar I added to the library 
directory, like we do for custom query parsers.  The pluginInfo would get 
loaded by the SolrConfig constructor.  Then in the request handler definition 
in solrconfig.xml, I would list that type name "topic_based_generator" as one 
type that handler uses to generate queries.  Then in the initialization for the 
handler it would instantiate the correct QueryGenerator class that 
"topic_based_generator" mapped to.

I.e., the handler would check info.name (from PluginInfo info) for the matching 
QueryGenerator PlugInfos it got from the SolrConfig for the one matching 
"topic_based_generator" then call solrCore.createInitInstance for that class to 
get the generator itself.



> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this

[jira] [Commented] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143428#comment-14143428
 ] 

Brian commented on SOLR-6548:
-

I guess it may not be as simple as I thought at first glance.

As of right now with only that change, for new custom types, it seems like the 
code for initializing them would have to go in the initialization code of the 
component that uses them.  I.e., it would be along the same lines as the code 
used in SolrCore to initialize the different components, e.g., "initQParsers()" 
in SolrCore gets all the plugin classes found in the SolrConfig having the 
QParser type and instantiates each.  


Here's one way it could work:
For my custom request handler that makes use of the custom plugin type, in a 
SolrCoreAware inform method, it would get all the plugin classes defined in the 
SolrConfig extending the QueryGenerator plugin type, and based on which are 
specified in its parameters, it would instantiate an object of each 
QueryGenerator class as needed.  So the component that uses the custom plugin 
type could be responsible for instantiating and using them (the code could live 
there), which would still allow adding new types without any code changes to 
that component itself.

I.e., say I wanted to add a new query generator type "topic_based_generator", 
then I would define "topic_based_generator" with a particular QueryGenerator 
class in solrconfig.xml contained in a new jar I added to the library 
directory, like we do for custom query parsers.  The pluginInfo would get 
loaded by the SolrConfig constructor.  Then in the request handler definition 
in solrconfig.xml, I would list that type name "topic_based_generator" as one 
type that handler uses to generate queries.  Then in the initialization for the 
handler it would instantiate the correct QueryGenerator class that 
"topic_based_generator" mapped to.

I.e., the handler would check info.name (from PluginInfo info) across the 
QueryGenerator PluginInfos it got from the SolrConfig, find the one matching 
"topic_based_generator", and then call solrCore.createInitInstance for that 
class to get the generator itself.



> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allowing 
> additional plugin type mappings to be added there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed.  Also, if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance it seems like the change to enable this could be relatively 
> simple: add a new "plugin-types" entry to the Solr configuration xml that 
> has "plugin-type" entries, each with a "name" and "class" sub-entry. 
> Then the constructor would read this list from the config and call the 
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6486) solr start script can have a debug flag option

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6486.
--
   Resolution: Fixed
Fix Version/s: 5.0

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6486.patch
>
>
> Normally I would add this line to my java -jar start.jar:
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> The script should take the whole string, or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true), or, if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983", 
> use it verbatim.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6481) CLUSTERSTATUS action should consult /live_nodes when reporting the state of a replica

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6481.
--
   Resolution: Fixed
Fix Version/s: 5.0

> CLUSTERSTATUS action should consult /live_nodes when reporting the state of a 
> replica
> -
>
> Key: SOLR-6481
> URL: https://issues.apache.org/jira/browse/SOLR-6481
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6481.patch
>
>
> The CLUSTERSTATUS action reports the state of replicas but doesn't check 
> /live_nodes, which means it might show a replica as "active" even though the 
> node is down, so the real state is "down".
> Ideally, we need a helper method that gets replica status by consulting live 
> nodes, so this error doesn't keep cropping up all over the code base.
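
A minimal sketch of such a helper, assuming access to the current ClusterState; 
the class and method names below are hypothetical.

{code}
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.ZkStateReader;

public class ReplicaStatusUtil {
  // A replica is only as "active" as its node is live.
  public static String getEffectiveState(ClusterState clusterState, Replica replica) {
    if (!clusterState.liveNodesContain(replica.getNodeName())) {
      return "down"; // the node is gone, so the stored replica state is stale
    }
    return replica.getStr(ZkStateReader.STATE_PROP);
  }
}
{code}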



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6484) SolrCLI's healthcheck action needs to check live nodes as part of reporting the status of a replica

2014-09-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6484.
--
   Resolution: Fixed
Fix Version/s: 5.0

> SolrCLI's healthcheck action needs to check live nodes as part of reporting 
> the status of a replica
> ---
>
> Key: SOLR-6484
> URL: https://issues.apache.org/jira/browse/SOLR-6484
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6484.patch
>
>
> Same basic issue as SOLR-6481



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6486) solr start script can have a debug flag option

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143417#comment-14143417
 ] 

ASF subversion and git services commented on SOLR-6486:
---

Commit 1626837 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626837 ]

SOLR-6486: solr start script can have a debug flag option; use -a to set 
arbitrary options.

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Attachments: SOLR-6486.patch
>
>
> Normally I would add this line to my java -jar start.jar:
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> The script should take the whole string, or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true), or, if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983", 
> use it verbatim.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6486) solr start script can have a debug flag option

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143399#comment-14143399
 ] 

ASF subversion and git services commented on SOLR-6486:
---

Commit 1626833 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1626833 ]

SOLR-6486: solr start script can have a debug flag option; use -a to set 
arbitrary options.

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Attachments: SOLR-6486.patch
>
>
> Normally I would add this line to my java -jar start.jar:
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> The script should take the whole string, or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true), or, if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983", 
> use it verbatim.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143395#comment-14143395
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626830 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626830 ]

LUCENE-5969: let AssertingCodec implement RandomAccessOrds, and test it 
directly too

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.
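
For context, a minimal sketch of the "3 ints" idea on the write side, assuming 
a DataOutput named out and the public major/minor/bugfix fields of Version; the 
surrounding SegmentInfo writer is omitted.

{code}
// Old approach: a String that must be parsed whenever the .si file is read,
// e.g. out.writeString(version.toString()) and Version.parse(in.readString()).
// Proposed approach: three plain ints, no parsing needed at read time.
out.writeInt(version.major);
out.writeInt(version.minor);
out.writeInt(version.bugfix);
{code}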



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6484) SolrCLI's healthcheck action needs to check live nodes as part of reporting the status of a replica

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143394#comment-14143394
 ] 

ASF subversion and git services commented on SOLR-6484:
---

Commit 1626829 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626829 ]

SOLR-6484: SolrCLI's healthcheck action needs to check live nodes as part of 
reporting the status of a replica

> SolrCLI's healthcheck action needs to check live nodes as part of reporting 
> the status of a replica
> ---
>
> Key: SOLR-6484
> URL: https://issues.apache.org/jira/browse/SOLR-6484
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6484.patch
>
>
> Same basic issue as SOLR-6481



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6481) CLUSTERSTATUS action should consult /live_nodes when reporting the state of a replica

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143393#comment-14143393
 ] 

ASF subversion and git services commented on SOLR-6481:
---

Commit 1626828 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626828 ]

SOLR-6481: CLUSTERSTATUS action should consult /live_nodes when reporting the 
state of a replica

> CLUSTERSTATUS action should consult /live_nodes when reporting the state of a 
> replica
> -
>
> Key: SOLR-6481
> URL: https://issues.apache.org/jira/browse/SOLR-6481
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6481.patch
>
>
> The CLUSTERSTATUS action reports the state of replicas but doesn't check 
> /live_nodes, which means it might show a replica as "active" even though the 
> node is down, so the real state is "down".
> Ideally, we need a helper method that gets replica status by consulting live 
> nodes, so this error doesn't keep cropping up all over the code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143390#comment-14143390
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626826 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626826 ]

LUCENE-5969: let AssertingCodec implement RandomAccessOrds, and test it 
directly too

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_40-ea-b04) - Build # 4229 - Failure!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4229/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([FE701619B062323B:7F969801C73D5207]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequ

[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143380#comment-14143380
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1626824 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626824 ]

SOLR-6233: Minor refactorings provided by Steve Davids.

> Provide basic command line tools for checking Solr status and health.
> -
>
> Key: SOLR-6233
> URL: https://issues.apache.org/jira/browse/SOLR-6233
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6233-minor-refactor.patch
>
>
> As part of the start script development work SOLR-3617, example restructuring 
> SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
> option on the SystemInfoHandler that gives a shorter, well formatted JSON 
> synopsis of essential information. I know "essential" is vague ;-) but right 
> now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
> much information when I just want a synopsis of a Solr server. 
> Maybe something like &overview=true?
> Result would be:
> {noformat}
> {
>   "address": "http://localhost:8983/solr";,
>   "mode": "solrcloud",
>   "zookeeper": "localhost:2181/foo",
>   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
>   "version": "5.0-SNAPSHOT",
>   "status": "healthy",
>   "memory": "4.2g of 6g"
> }
> {noformat}
> Now of course, one may argue all this information can be easily parsed from 
> the JSON but consider cross-platform command-line tools that don't have 
> immediate access to a JSON parser, such as the bin/solr start script.
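
For illustration, a self-contained sketch of how a tool could fetch that synopsis without a JSON library; note that the overview parameter is only the proposal above, not an existing API:

{code}
// Sketch only: "overview=true" is the parameter proposed in this issue, not a
// shipped one. Plain java.net is used so no JSON parser is required.
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class SolrOverviewCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8983/solr/admin/info/system?wt=json&overview=true");
    try (InputStream in = url.openStream();
         Scanner scanner = new Scanner(in, "UTF-8")) {
      // The synopsis is small enough to print verbatim for a human or a script.
      System.out.println(scanner.useDelimiter("\\A").next());
    }
  }
}
{code}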






[jira] [Commented] (SOLR-6484) SolrCLI's healthcheck action needs to check live nodes as part of reporting the status of a replica

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143375#comment-14143375
 ] 

ASF subversion and git services commented on SOLR-6484:
---

Commit 1626823 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1626823 ]

SOLR-6484: SolrCLI's healthcheck action needs to check live nodes as part of 
reporting the status of a replica

> SolrCLI's healthcheck action needs to check live nodes as part of reporting 
> the status of a replica
> ---
>
> Key: SOLR-6484
> URL: https://issues.apache.org/jira/browse/SOLR-6484
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6484.patch
>
>
> Same basic issue as SOLR-6481






Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95400 - Failure!

2014-09-22 Thread Robert Muir
I will take a look. I think this wait() is wrong in this case.

On Mon, Sep 22, 2014 at 12:05 PM,   wrote:
> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/95400/
>
> 2 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.index.TestIndexFileDeleter
>
> Error Message:
> Suite timeout exceeded (>= 720 msec).
>
> Stack Trace:
> java.lang.Exception: Suite timeout exceeded (>= 720 msec).
> at __randomizedtesting.SeedInfo.seed([9E88DC4EE38EBEB9]:0)
>
>
> REGRESSION:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef
>
> Error Message:
> Test abandoned because suite timeout was reached.
>
> Stack Trace:
> java.lang.Exception: Test abandoned because suite timeout was reached.
> at __randomizedtesting.SeedInfo.seed([9E88DC4EE38EBEB9]:0)
>
>
>
>
> Build Log:
> [...truncated 1548 lines...]
>[junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter
>[junit4]   2> sep 22, 2014 6:04:59 PM 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
>[junit4]   2> ADVERTENCIA: Suite execution timed out: 
> org.apache.lucene.index.TestIndexFileDeleter
>[junit4]   2>  jstack at approximately timeout time 
>[junit4]   2> "Lucene Merge Thread #0" ID=282 BLOCKED on 
> org.apache.lucene.index.IndexWriter@c996c68 owned by 
> "TEST-TestIndexFileDeleter.testExcInDecRef-seed#[9E88DC4EE38EBEB9]" ID=281
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.getNextMerge(IndexWriter.java:1892)
>[junit4]   2>- blocked on 
> org.apache.lucene.index.IndexWriter@c996c68
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
>[junit4]   2>
>[junit4]   2> 
> "TEST-TestIndexFileDeleter.testExcInDecRef-seed#[9E88DC4EE38EBEB9]" ID=281 
> WAITING on 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread@41594b02
>[junit4]   2>at java.lang.Object.wait(Native Method)
>[junit4]   2>- waiting on 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread@41594b02
>[junit4]   2>at java.lang.Thread.join(Thread.java:1281)
>[junit4]   2>at java.lang.Thread.join(Thread.java:1355)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.sync(ConcurrentMergeScheduler.java:281)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.close(ConcurrentMergeScheduler.java:262)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:1955)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:1930)
>[junit4]   2>- locked java.lang.Object@2c9c264e
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4389)
>[junit4]   2>- locked java.lang.Object@2c9c264e
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:2915)
>[junit4]   2>- locked org.apache.lucene.index.IndexWriter@c996c68
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2888)
>[junit4]   2>- locked java.lang.Object@2c9c264e
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2848)
>[junit4]   2>at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:339)
>[junit4]   2>at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:271)
>[junit4]   2>at 
> org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:466)
>[junit4]   2>at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]   2>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>[junit4]   2>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]   2>at java.lang.reflect.Method.invoke(Method.java:606)
>[junit4]   2>at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>[junit4]   2>at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>[junit4]   2>at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>[junit4]   2>at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>[junit4]   2>at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>[junit4]   2>at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>[junit4]   2>at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvar

[jira] [Commented] (SOLR-6481) CLUSTERSTATUS action should consult /live_nodes when reporting the state of a replica

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143355#comment-14143355
 ] 

ASF subversion and git services commented on SOLR-6481:
---

Commit 1626818 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1626818 ]

SOLR-6481: CLUSTERSTATUS action should consult /live_nodes when reporting the 
state of a replica

> CLUSTERSTATUS action should consult /live_nodes when reporting the state of a 
> replica
> -
>
> Key: SOLR-6481
> URL: https://issues.apache.org/jira/browse/SOLR-6481
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6481.patch
>
>
> CLUSTERSTATUS action reports the state of replicas but it doesn't check 
> /live_nodes, which means it might show a replica as "active" but the node is 
> down, so the real state is "down".
> Ideally, we need a helper method that gets replica status that consults live 
> nodes so this error doesn't keep cropping up all over the code base.






RE: Help with `ant beast`

2014-09-22 Thread Chris Hostetter

: I would prefer to change the parent modules to have an explicit target. See 
my other message.

you're missing the point -- you can do both.

Add an explicit top level beast target w/whatever message you think is 
best. But in general, if you've got a target that has preconditions, and 
you don't like the message that target fails with when the preconditions 
aren't met, then add an explicit check of those preconditions and fail 
with a better message.

Yes, junit.classpath is an implementation detail, but it's a detail of the 
same target where you can add this check ... whatever the implementation 
details are, you can add the same check.

That way you are protecting the target (and its error messages) from any 
possible oversight in forgetting to add an explicit "override" with a 
helpful error, or if the build.xml files are refactored, or if the 
directories are moved around, etc...

: 
: -
: Uwe Schindler
: H.-H.-Meier-Allee 63, D-28213 Bremen
: http://www.thetaphi.de
: eMail: u...@thetaphi.de
: 
: 
: > -Original Message-
: > From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
: > Sent: Friday, September 19, 2014 6:56 PM
: > To: dev@lucene.apache.org
: > Subject: RE: Help with `ant beast`
: > 
: > :
: > : This is correct! Maybe we can improve the error message, but this is not
: > : so easy... What is the best way to detect if a build file is a parent
: > : one? Of course we could add a dummy target to all parent build files -
: > : "ant test" has this to delegate to subant builds, but we don’t want to
: > : do this here.
: > 
: > can we just add something like this inside the "beast" target?...
: > 
: > <fail message="The beast target can only be run from a module that has tests">
: >   <condition>
: >     <not><isreference refid="junit.classpath"/></not>
: >   </condition>
: > </fail>
: > 
: > 
: > :
: > : Uwe
: > :
: > : -
: > : Uwe Schindler
: > : H.-H.-Meier-Allee 63, D-28213 Bremen
: > : http://www.thetaphi.de
: > : eMail: u...@thetaphi.de
: > :
: > :
: > : > -Original Message-
: > : > From: Steve Rowe [mailto:sar...@gmail.com]
: > : > Sent: Friday, September 19, 2014 5:54 PM
: > : > To: dev@lucene.apache.org
: > : > Subject: Re: Help with `ant beast`
: > : >
: > : > I think ‘ant beast’ only works in the directory of the module 
containing the
: > : > test, not at a higher level.
: > : >
: > : > On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar
: > : >  wrote:
: > : >
: > : > > I am trying to use `ant beast` on trunk (per the recommendation in 
test-
: > : > help) and getting this error:
: > : > >
: > : > > ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
: > : > > -Dtestcase=TestBytesStore
: > : > >
: > : > > -beast:
: > : > >   [beaster] Beast round: 1
: > : > >
: > : > > BUILD FAILED
: > : > > ~/lucene-solr/lucene/common-build.xml:1363: The following error
: > : > occurred while executing this line:
: > : > > ~/lucene-solr/lucene/common-build.xml:1358: The following error
: > : > occurred while executing this line:
: > : > > ~/lucene-solr/lucene/common-build.xml:961: Reference junit.classpath
: > : > not found.
: > : > >
: > : > > `ant test` works just fine. Any idea where the problem might be?
: > : >
: > : >
: > :
: > :
: > 
: > -Hoss
: > http://www.lucidworks.com/
: 
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/


[jira] [Commented] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143351#comment-14143351
 ] 

Hoss Man commented on SOLR-6548:


I'm missing something fundamental to your suggestion here ... you have 
configuration to tell you what types of plugin configurations are supported? 
... and then what? loadPluginInfo creates a corresponding PluginInfo object, 
and what happens next? ... what piece of code does something with those 
PluginInfo objects?

I don't understand how you can have user-defined _types_ of plugins w/o some 
code in Solr knowing about those types of plugins and invoking them.
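
For concreteness, a hypothetical sketch of the missing piece this comment is asking about: even after a custom "queryGenerator" type is parsed into PluginInfo objects, some handler code still has to look the plugins up and invoke them (all names here are illustrative):

{code}
// Hypothetical consumer: configuration can register plugins, but only code
// like this can actually call a user-defined plugin type.
import java.util.Map;

interface QueryGenerator {
  String generate(String userInput);
}

class QueryGeneratorRegistry {
  private final Map<String, QueryGenerator> generators;

  QueryGeneratorRegistry(Map<String, QueryGenerator> generators) {
    this.generators = generators;
  }

  /** Look up a registered generator by name and run it; this lookup-and-invoke
   *  step cannot be expressed in configuration alone. */
  String runGenerator(String name, String userInput) {
    QueryGenerator generator = generators.get(name);
    if (generator == null) {
      throw new IllegalArgumentException("unknown queryGenerator: " + name);
    }
    return generator.generate(userInput);
  }
}
{code}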

> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> {code}
> I propose adding these mappings to solrconfig.xml, or at least allow adding 
> additional plugin type mappings there.
> That way it would be easy to add new plugin types in the future, and users 
> could create their own custom plugin types as needed. Also, if the default 
> ones were solely defined in the list, they could then easily be overridden.
> At first glance the change to enable this could be relatively simple: add a 
> new "plugin-types" entry to the Solr configuration XML that has 
> "plugin-type" entries, each with a "name" and "class" sub-entry.
> Then the constructor would read this list from the config and call the 
> loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.






[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95400 - Failure!

2014-09-22 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/95400/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexFileDeleter

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([9E88DC4EE38EBEB9]:0)


REGRESSION:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([9E88DC4EE38EBEB9]:0)




Build Log:
[...truncated 1548 lines...]
   [junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter
   [junit4]   2> sep 22, 2014 6:04:59 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> ADVERTENCIA: Suite execution timed out: 
org.apache.lucene.index.TestIndexFileDeleter
   [junit4]   2>  jstack at approximately timeout time 
   [junit4]   2> "Lucene Merge Thread #0" ID=282 BLOCKED on 
org.apache.lucene.index.IndexWriter@c996c68 owned by 
"TEST-TestIndexFileDeleter.testExcInDecRef-seed#[9E88DC4EE38EBEB9]" ID=281
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.getNextMerge(IndexWriter.java:1892)
   [junit4]   2>- blocked on org.apache.lucene.index.IndexWriter@c996c68
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
   [junit4]   2> 
   [junit4]   2> 
"TEST-TestIndexFileDeleter.testExcInDecRef-seed#[9E88DC4EE38EBEB9]" ID=281 
WAITING on org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread@41594b02
   [junit4]   2>at java.lang.Object.wait(Native Method)
   [junit4]   2>- waiting on 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread@41594b02
   [junit4]   2>at java.lang.Thread.join(Thread.java:1281)
   [junit4]   2>at java.lang.Thread.join(Thread.java:1355)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler.sync(ConcurrentMergeScheduler.java:281)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler.close(ConcurrentMergeScheduler.java:262)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:1955)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:1930)
   [junit4]   2>- locked java.lang.Object@2c9c264e
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4389)
   [junit4]   2>- locked java.lang.Object@2c9c264e
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:2915)
   [junit4]   2>- locked org.apache.lucene.index.IndexWriter@c996c68
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2888)
   [junit4]   2>- locked java.lang.Object@2c9c264e
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2848)
   [junit4]   2>at 
org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:339)
   [junit4]   2>at 
org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:271)
   [junit4]   2>at 
org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:466)
   [junit4]   2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   [junit4]   2>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:606)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   [junit4]   2>at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   [junit4]   2>at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   [junit4]   2>at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   [junit4]   2>at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   [junit4]   2>

[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/22/14 3:41 PM:
---

From my understanding, the CAPIServer is listening on an IP/port. Couchbase 
can be configured to replicate a bucket to a specific host and port, so 
running the CAPIServer on each replica just means that there will be many 
CAPIServers listening. The actual replication session will be between 
Couchbase and a single CAPIServer, so in a single replication session 
documents will flow to one CAPIServer, and that CAPIServer will move the 
documents into the distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds unneeded complexity to the implementation 
and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully is whether we need a CAPIServer 
running per collection or per Solr node, because this affects the entire 
design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out where to place the CAPIServer so there is only one per node.
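
As a minimal sketch of that routing idea, assuming a configured bucket-to-collection mapping; none of these names come from the attached plugin:

{code}
// Hypothetical: a single shared CAPIServer would resolve the target Solr
// collection for each replicated document from the Couchbase bucket name.
import java.util.Map;

public class BucketRouter {

  private final Map<String, String> bucketToCollection;

  public BucketRouter(Map<String, String> bucketToCollection) {
    this.bucketToCollection = bucketToCollection;
  }

  /** Resolve the target collection for a bucket, failing fast when unmapped. */
  public String collectionFor(String bucketName) {
    String collection = bucketToCollection.get(bucketName);
    if (collection == null) {
      throw new IllegalArgumentException("No collection mapped for bucket: " + bucketName);
    }
    return collection;
  }
}
{code}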







was (Author: joel.bernstein):
From my understanding, the CAPIServer is listening on an ip/port. Couchbase 
can be configured to replicate a bucket to a specific host and port, so 
running the CAPIServer just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a 
single CAPIServer, so in a single replication session documents will flow to 
one CAPIServer, and that CAPIServer will move the documents into the 
distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds unneeded complexity to the implementation 
and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice APIs which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, the

[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143289#comment-14143289
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1626802 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1626802 ]

SOLR-6233: Minor refactorings provided by Steve Davids.

> Provide basic command line tools for checking Solr status and health.
> -
>
> Key: SOLR-6233
> URL: https://issues.apache.org/jira/browse/SOLR-6233
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6233-minor-refactor.patch
>
>
> As part of the start script development work SOLR-3617, example restructuring 
> SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
> option on the SystemInfoHandler that gives a shorter, well formatted JSON 
> synopsis of essential information. I know "essential" is vague ;-) but right 
> now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
> much information when I just want a synopsis of a Solr server. 
> Maybe something like &overview=true?
> Result would be:
> {noformat}
> {
>   "address": "http://localhost:8983/solr";,
>   "mode": "solrcloud",
>   "zookeeper": "localhost:2181/foo",
>   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
>   "version": "5.0-SNAPSHOT",
>   "status": "healthy",
>   "memory": "4.2g of 6g"
> }
> {noformat}
> Now of course, one may argue all this information can be easily parsed from 
> the JSON but consider cross-platform command-line tools that don't have 
> immediate access to a JSON parser, such as the bin/solr start script.






[jira] [Updated] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian updated SOLR-6548:

Description: 
I wanted to add a custom plugin type so that for a new request handler that 
uses different types of query generators it would be easy to add new query 
generators in the future through the plugin system, without having to touch the 
handler code.  To do so I wanted to add a "queryGenerator" plugin type.

As far as I can tell this is not possible without modifying the solr core code.

All of the available plugin types and corresponding names are hard-coded in the 
SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, String 
name, InputSource is)" like:

{code}
loadPluginInfo(SolrRequestHandler.class,"requestHandler",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QParserPlugin.class,"queryParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
{code}

I propose adding these mappings to solrconfig.xml, or at least allow adding 
additional plugin type mappings there.

That way it would be easy to add new plugin types in the future, and users 
could create their own custom plugin types as needed. Also, if the default 
ones were solely defined in the list, they could then easily be overridden.

At first glance the change to enable this could be relatively simple: add a 
new "plugin-types" entry to the Solr configuration XML that has "plugin-type" 
entries, each with a "name" and "class" sub-entry.

Then the constructor would read this list from the config and call the 
loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each entry.



  was:
I wanted to add a custom plugin type so that for a new request handler that 
uses different types of query generators it would be easy to add new query 
generators in the future through the plugin system, without having to touch the 
handler code.  To do so I wanted to add a "queryGenerator" plugin type.

As far as I can tell this is not possible without modifying the solr core code.

All of the available plugin types and corresponding names are hard-coded in the 
SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, String 
name, InputSource is)" like:

{code}
loadPluginInfo(SolrRequestHandler.class,"requestHandler",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QParserPlugin.class,"queryParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
{code}

I propose adding these mappings to solrconfig.xml, or at least allow adding 
additional plugin type mappings there.

That way it would be easy to add new plugin types in the future, and users 
could create their own custom plugin types as needed. Also, if the default 
ones were solely defined in the list, they could then easily be overridden.

At first glance the change to enable this could be relatively simple: add a 
new "plugin-types" entry to the Solr configuration XML that has "plugin-type" 
entries, each with a "name" and "class" sub-entry.

Then the constructor would read this list from the config and call the 
loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each one.




> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
> 

[jira] [Updated] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian updated SOLR-6548:

Description: 
I wanted to add a custom plugin type so that for a new request handler that 
uses different types of query generators it would be easy to add new query 
generators in the future through the plugin system, without having to touch the 
handler code.  To do so I wanted to add a "queryGenerator" plugin type.

As far as I can tell this is not possible without modifying the solr core code.

All of the available plugin types and corresponding names are hard-coded in the 
SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, String 
name, InputSource is)" like:

{code}
loadPluginInfo(SolrRequestHandler.class,"requestHandler",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QParserPlugin.class,"queryParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
{code}

I propose adding these mappings to solrconfig.xml, or at least allow adding 
additional plugin type mappings there.

That way it would be easy to add new plugin types in the future, and users 
could create their own custom plugin types as needed. Also, if the default 
ones were solely defined in the list, they could then easily be overridden.

At first glance the change to enable this could be relatively simple: add a 
new "plugin-types" entry to the Solr configuration XML that has "plugin-type" 
entries, each with a "name" and "class" sub-entry.

Then the constructor would read this list from the config and call the 
loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each one.



  was:
I wanted to add a custom plugin type so that for a new request handler that 
uses different types of query generators it would be easy to add new query 
generators in the future through the plugin system, without having to touch the 
handler code.  To do so I wanted to add a "queryGenerator" plugin type.

As far as I can tell this is not possible without modifying the solr core code.

All of the available plugin types and corresponding names are hard-coded in the 
SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, String 
name, InputSource is)" like:


loadPluginInfo(SolrRequestHandler.class,"requestHandler",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QParserPlugin.class,"queryParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);


I propose adding these mappings to solrconfig.xml, or at least allow adding 
additional plugin type mappings there.

That way it would be easy to add new plugin types in the future, and users 
could create their own custom plugin types as needed. Also, if the default 
ones were solely defined in the list, they could then easily be overridden.

At first glance the change to enable this could be relatively simple: add a 
new "plugin-types" entry to the Solr configuration XML that has "plugin-type" 
entries, each with a "name" and "class" sub-entry.

Then the constructor would read this list from the config and call the 
loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each one.




> Enable custom plugin types via configuration
> 
>
> Key: SOLR-6548
> URL: https://issues.apache.org/jira/browse/SOLR-6548
> Project: Solr
>  Issue Type: Improvement
>Reporter: Brian
>Priority: Minor
>
> I wanted to add a custom plugin type so that for a new request handler that 
> uses different types of query generators it would be easy to add new query 
> generators in the future through the plugin system, without having to touch 
> the handler code.  To do so I wanted to add a "queryGenerator" plugin type.
> As far as I can tell this is not possible without modifying the solr core 
> code.
> All of the available plugin types and corresponding names are hard-coded in 
> the SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, 
> String name, InputSource is)" like:
> {code}
> loadPluginInfo(SolrRequestHandler.class,"requestHandler",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  loadPluginInfo(QParserPlugin.class,"queryParser",
> REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
>  

[jira] [Created] (SOLR-6548) Enable custom plugin types via configuration

2014-09-22 Thread Brian (JIRA)
Brian created SOLR-6548:
---

 Summary: Enable custom plugin types via configuration
 Key: SOLR-6548
 URL: https://issues.apache.org/jira/browse/SOLR-6548
 Project: Solr
  Issue Type: Improvement
Reporter: Brian
Priority: Minor


I wanted to add a custom plugin type so that for a new request handler that 
uses different types of query generators it would be easy to add new query 
generators in the future through the plugin system, without having to touch the 
handler code.  To do so I wanted to add a "queryGenerator" plugin type.

As far as I can tell this is not possible without modifying the solr core code.

All of the available plugin types and corresponding names are hard-coded in the 
SolrConfig constructor "public SolrConfig(SolrResourceLoader loader, String 
name, InputSource is)" like:


loadPluginInfo(SolrRequestHandler.class,"requestHandler",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QParserPlugin.class,"queryParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(QueryResponseWriter.class,"queryResponseWriter",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);
 loadPluginInfo(ValueSourceParser.class,"valueSourceParser",
REQUIRE_NAME, REQUIRE_CLASS, MULTI_OK);


I propose adding these mappings to solrconfig.xml, or at least allow adding 
additional plugin type mappings there.

That way it would be easy to add new plugin types in the future, and users 
could create their own custom plugin types as needed. Also, if the default 
ones were solely defined in the list, they could then easily be overridden.

At first glance the change to enable this could be relatively simple: add a 
new "plugin-types" entry to the Solr configuration XML that has "plugin-type" 
entries, each with a "name" and "class" sub-entry.

Then the constructor would read this list from the config and call the 
loadPluginInfo(pluginTypeClass, pluginTypeName, ...) method on each one.
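
A standalone sketch of the proposed loop, with hypothetical element names and a simplified stand-in for loadPluginInfo (neither is SolrConfig's real API):

{code}
// Sketch only: simulates reading <plugin-type name="..." class="..."/> entries
// from a hypothetical <plugin-types> section and registering each one.
import java.util.LinkedHashMap;
import java.util.Map;

public class PluginTypeConfigSketch {

  // Stand-in for SolrConfig.loadPluginInfo(Class, String, ...).
  static void loadPluginInfo(String className, String pluginTypeName) {
    System.out.println("registering plugin type '" + pluginTypeName + "' -> " + className);
  }

  public static void main(String[] args) {
    // Entries as they might be parsed out of solrconfig.xml.
    Map<String, String> pluginTypes = new LinkedHashMap<>();
    pluginTypes.put("requestHandler", "org.apache.solr.handler.RequestHandlerBase");
    pluginTypes.put("queryGenerator", "com.example.QueryGenerator"); // user-defined type

    for (Map.Entry<String, String> entry : pluginTypes.entrySet()) {
      loadPluginInfo(entry.getValue(), entry.getKey());
    }
  }
}
{code}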








[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/22/14 2:52 PM:
---

From my understanding, the CAPIServer is listening on an ip/port. Couchbase 
can be configured to replicate a bucket to a specific host and port, so 
running the CAPIServer just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a 
single CAPIServer, so in a single replication session documents will flow to 
one CAPIServer, and that CAPIServer will move the documents into the 
distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds unneeded complexity to the implementation 
and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.







was (Author: joel.bernstein):
From my understanding, the CAPIServer is listening on a port. Couchbase can 
be configured to replicate a bucket to a specific host and port, so running 
the CAPIServer just means that there will be many CAPIServers running. The 
actual replication session will be between Couchbase and a single CAPIServer, 
so in a single replication session documents will flow to one CAPIServer, and 
that CAPIServer will move the documents into the distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds unneeded complexity to the implementation 
and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice APIs which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchba

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143261#comment-14143261
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626796 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626796 ]

LUCENE-5969: improve this test, we cant uncomment the assert until we fix 5.0 
codec

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143258#comment-14143258
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626794 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626794 ]

LUCENE-5969: improve this test, we cant uncomment the assert until we fix 5.0 
codec

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/22/14 2:48 PM:
---

From my understanding, the CAPIServer is listening on a port. Couchbase can 
be configured to replicate a bucket to a specific host and port, so running 
the CAPIServer just means that there will be many CAPIServers running. The 
actual replication session will be between Couchbase and a single CAPIServer, 
so in a single replication session documents will flow to one CAPIServer, and 
that CAPIServer will move the documents into the distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds unneeded complexity to the implementation 
and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.







was (Author: joel.bernstein):
From my understanding, the CAPIServer is listening on a port. Couchbase can 
be configured to replicate a bucket to a specific host and port, so running 
the CAPIServer just means that there will be many CAPIServers running. The 
actual replication session will be between Couchbase and a single CAPIServer, 
so in a single replication session documents will flow to one CAPIServer, and 
that CAPIServer will move the documents into the distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds additional complexity to the 
implementation and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice APIs which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> u

[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/22/14 2:47 PM:
---

From my understanding, the CAPIServer is listening on a port. Couchbase can 
be configured to replicate a bucket to a specific host and port, so running 
the CAPIServer just means that there will be many CAPIServers running. The 
actual replication session will be between Couchbase and a single CAPIServer, 
so in a single replication session documents will flow to one CAPIServer, and 
that CAPIServer will move the documents into the distributed indexing flow.

In this scenario, running a CAPIServer on all replicas really has no downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly at an ip:port; 
if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas, this is not an issue.

2) If we run the CAPIServer only on leaders, then we need to manage bringing 
CAPIServers up and down. This adds additional complexity to the 
implementation and tests.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route, then we will need to route documents to 
the correct collections based on the bucket name. We'll also need to figure 
out how to place the CAPIServer so there is only one per node.







was (Author: joel.bernstein):
From my understanding the CAPIServer listens on a port. Couchbase can be 
configured to replicate a bucket to a specific host and port, so running the 
CAPIServer on every replica just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a single 
CAPIServer. So in a single replication session documents will flow to one 
CAPIServer, and that CAPIServer will move the documents into the distributed 
indexing flow.

Given this scenario, running a CAPIServer on all replicas really has no 
downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly to an ip:port, 
so if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas then this is not an issue.

2) Conversely, if we run the CAPIServer everywhere we don't have to manage 
bringing CAPIServers up and down as the leader changes, which removes quite a 
bit of complexity from the design.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per Collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route then we will need to route documents to the 
correct collection based on the bucket name. We'll also need to figure out how 
to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the c

[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/22/14 2:45 PM:
---

From my understanding the CAPIServer listens on a port. Couchbase can be 
configured to replicate a bucket to a specific host and port, so running the 
CAPIServer on every replica just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a single 
CAPIServer. So in a single replication session documents will flow to one 
CAPIServer, and that CAPIServer will move the documents into the distributed 
indexing flow.

Given this scenario, running a CAPIServer on all replicas really has no 
downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly to an ip:port, 
so if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas then this is not an issue.

2) Conversely, if we run the CAPIServer everywhere we don't have to manage 
bringing CAPIServers up and down as the leader changes, which removes quite a 
bit of complexity from the design.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per Collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route then we will need to route documents to the 
correct collection based on the bucket name. We'll also need to figure out how 
to place the CAPIServer so there is only one per node.







was (Author: joel.bernstein):
From my understanding the CAPIServer listens on a port. Couchbase can be 
configured to replicate a bucket to a specific host and port, so running the 
CAPIServer on every replica just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a single 
CAPIServer. So in a single replication session documents will flow to one 
CAPIServer, and that CAPIServer and its Solr instance will move the documents 
into the distributed indexing flow.

Given this scenario, running a CAPIServer on all replicas really has no 
downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly to an ip:port, 
so if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas then this is not an issue.

2) Conversely, if we run the CAPIServer everywhere we don't have to manage 
bringing CAPIServers up and down as the leader changes, which removes quite a 
bit of complexity from the design.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per Collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route then we will need to route documents to the 
correct collection based on the bucket name. We'll also need to figure out how 
to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming update

[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr

2014-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143257#comment-14143257
 ] 

Joel Bernstein commented on SOLR-6266:
--

From my understanding the CAPIServer listens on a port. Couchbase can be 
configured to replicate a bucket to a specific host and port, so running the 
CAPIServer on every replica just means that there will be many CAPIServers 
running. The actual replication session will be between Couchbase and a single 
CAPIServer. So in a single replication session documents will flow to one 
CAPIServer, and that CAPIServer and its Solr instance will move the documents 
into the distributed indexing flow.

Given this scenario, running a CAPIServer on all replicas really has no 
downside.

But running the CAPIServer on just the leader has a couple of major downsides:

1) Leaders and replicas will change. Couchbase points directly to an ip:port, 
so if that node is suddenly no longer the leader, replication stops. If the 
CAPIServer is running on all replicas then this is not an issue.

2) Conversely, if we run the CAPIServer everywhere we don't have to manage 
bringing CAPIServers up and down as the leader changes, which removes quite a 
bit of complexity from the design.

We don't have to worry about duplicate indexing on shards by running 
CAPIServers on the replicas. If we inject the documents properly into the 
SolrCloud indexing flow, then SolrCloud will ensure that documents get to the 
right place.

What we do have to consider very carefully, though, is whether we need a 
CAPIServer running per Collection or per Solr node, because this affects the 
entire design.

My thinking is that we should have a single CAPIServer per Solr node to 
service all collections. I'm assuming that the CAPIServer has thread overhead 
that we don't want for each collection.

But if we decide to go this route then we will need to route documents to the 
correct collection based on the bucket name. We'll also need to figure out how 
to place the CAPIServer so there is only one per node.






> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-09-22 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-5911:
--
Attachment: LUCENE-5911.patch

In the end I decided that the nicest API change was to add a freeze() method, 
which prepares the internal data structures for querying and prevents users 
from adding new fields. You can continue to use MemoryIndex as in the past, 
but if you want to search from multiple threads, call freeze() first.

Also adds a new test, and renames the existing test to be a bit more 
descriptive of what it's actually doing.
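
A minimal usage sketch under the proposed API (freeze() is from the attached 
patch; the field, text, and analyzer are illustrative):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.TermQuery;

public class FrozenMemoryIndexDemo {
  public static void main(String[] args) throws Exception {
    MemoryIndex mi = new MemoryIndex();
    mi.addField("body", "the quick brown fox", new StandardAnalyzer());
    mi.freeze(); // from the patch: prepares internals, rejects further addField calls

    // Once frozen, the same instance can be queried from multiple threads.
    Runnable search = () -> {
      float score = mi.search(new TermQuery(new Term("body", "fox")));
      System.out.println("score=" + score);
    };
    Thread t1 = new Thread(search);
    Thread t2 = new Thread(search);
    t1.start(); t2.start();
    t1.join(); t2.join();
  }
}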

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143229#comment-14143229
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626781 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626781 ]

LUCENE-5969: add TestUtil.getDefaultCodec()

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143224#comment-14143224
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1626778 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626778 ]

LUCENE-5969: add TestUtil.getDefaultCodec()

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 6.0
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143219#comment-14143219
 ] 

Michael McCandless commented on LUCENE-5879:



bq. So for the whole segment, does it decide to insert auto-prefix'es at 
specific byte lengths (e.g. 3, 5, and 7)? Or does it vary based on specific 
terms? I'm hoping it's smart enough to vary based on specific terms. For 
example if, hypothetically there were lots of terms that had this common 
prefix: "BGA" then it might decide "BGA" makes a good auto-prefix but not 
necessarily all terms at length 3 since many others might not make good 
prefixes. Make sense?

It's dynamic, based on how terms occupy the space.

Today (and we can change this: it's an impl. detail) it assigns
prefixes just like it assigns terms to blocks.  Ie, when it sees a
given prefix matches 25 - 48 terms, it inserts a new auto-prefix
term.  That auto-prefix term replaces those 25 - 48 other terms with 1
term, and then the process "recurses", i.e. you can then have a
shorter auto-prefix term matching 25 - 48 other normal and/or
auto-prefix terms.

bq. At a low level, do I take advantage of this in the same way that I might do 
so at a high level using PrefixQuery and then getting the weight then getting 
the scorer to iterate docIds? Or is there a lower-level path? Although there is 
some elegance to not introducing new APIs, I think it's worth exploring having 
prefix & range capabilities be on the TermsEnum in some way.

What this patch does is generalize/relax Terms.intersect: that method
no longer guarantees that you see "true" terms.  Rather, it now
guarantees only that the docIDs you visit will be the same.

So to take advantage of it, you need to pass an Automaton to
Terms.intersect and then not care about which terms you see, only the
docIDs after visiting the DocsEnum across all terms it returned to
you.
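
A rough sketch of that usage pattern against the 4.x-style API (the reader, 
field, and compiled automaton are assumed to come from the caller, e.g. a 
CompiledAutomaton built for a prefix or range; deletions are ignored for 
brevity):

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.DocsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.automaton.CompiledAutomaton;

public class IntersectDocs {
  // Collect the docIDs matching an automaton; the "terms" returned by
  // intersect() may be auto-prefix terms, so we ignore them and only
  // trust the documents they enumerate.
  public static FixedBitSet collect(AtomicReader reader, String field,
                                    CompiledAutomaton automaton)
      throws Exception {
    FixedBitSet docs = new FixedBitSet(reader.maxDoc());
    Terms terms = reader.terms(field);
    if (terms == null) {
      return docs;
    }
    TermsEnum te = terms.intersect(automaton, null);
    DocsEnum docsEnum = null;
    while (te.next() != null) {
      docsEnum = te.docs(null, docsEnum, DocsEnum.FLAG_NONE);
      int doc;
      while ((doc = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
        docs.set(doc);
      }
    }
    return docs;
  }
}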

bq. Do you envision other posting formats being able to re-use the logic here?  
That would be nice.

I agree it would be nice ... and the index-time logic that identifies
the auto-prefix terms is quite standalone, so e.g. we could pull it
out and have it "wrap" the incoming Fields to insert the auto-prefix
terms.  This way it's nearly transparent to any postings format ...

But the problem is, at search time, there's tricky logic in
intersect() to use these prefix terms ... factoring that out so other
formats can use it is trickier I think... though maybe we could fold
it into the default Terms.intersect() impl...

bq. In your future tuning, I suggest you give the ability to vary the 
conservative vs aggressive prefixing based on the very beginning and very end 
(assuming known common lengths).  In the FlexPrefixTree Varun (GSOC) worked on, 
the leaves per level is configurable at each level (i.e. prefix length)... and 
it's better to have little prefixing at the very top and little at the bottom 
too.  At the top, prefixes only help for queries span massive portions of the 
possible term space (which in spatial is rare; likely other apps too).  And at 
the bottom (long prefixes) just shy of the maximum length (say 7 bytes out of 8 
for a double), there is marginal value because in the spatial search algorithm, 
the bottom detail is scan'ed over (e.g. TermsEnum.next()) instead of seek'ed, 
because the data is less dense and it's adjacent.  This principle may apply to 
numeric-range queries depending on how they are coded; I'm not sure.

I agree this (how auto-prefix terms are assigned) needs more control /
experimenting.  Really the best prefixes are a function not only of
how the terms were distributed, but also of how queries will "likely"
ask for ranges.

I think it's similar to postings skip lists, where we have a different
skip-pointer frequency on the "leaf" level vs the "upper" skip levels.


> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distrib

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_20) - Build # 4331 - Still Failing!

2014-09-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4331/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrSchemalessExampleTest.testAddDelete

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:52361/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:52361/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([E031A2887B0FFB2A:28D1DF828AA728FC]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:563)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
at 
org.apache.solr.client.solrj.SolrExampleTestsBase.testAddDelete(SolrExampleTestsBase.java:210)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.T

Re: continuing gc when too many search threads

2014-09-22 Thread Shawn Heisey
On 9/22/2014 6:42 AM, Li Li wrote:
> I have an index of about 30 million short strings; the index size is
> about 3GB on disk. I have given the JVM 5GB of memory with default
> settings on Ubuntu 12.04 with Sun JDK 7. When I use 20 threads it's
> OK, but if I run 30 threads, after a while the JVM is doing nothing
> but GC.

When your JVM is doing constant GC, your heap isn't big enough to do the
job it's being asked to do by the software.  Your options are to
increase your heap size, or change how your program works so that it
needs less heap.

Right now, if you have the memory available, simply make your heap
larger.  I'm not really sure how to reduce heap requirements for a
custom Lucene program, although if any of these notes for Solr (which is
of course a Lucene program) can be helpful to you, here they are:

http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
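
For example, a launch line raising the heap might look like this (the sizes,
classpath, and main class are illustrative; adjust to your machine):

java -Xms6g -Xmx6g -cp your-app.jar com.example.YourSearchApp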

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



continuing gc when too many search threads

2014-09-22 Thread Li Li
I have an index of about 30 million short strings; the index size is
about 3GB on disk. I have given the JVM 5GB of memory with default
settings on Ubuntu 12.04 with Sun JDK 7. When I use 20 threads it's OK,
but if I run 30 threads, after a while the JVM is doing nothing but GC.
The Lucene version is 4.10.0.

I ran jmap -histo:live and found very many
ConcurrentHashMap$HashEntry instances. With 20 threads there are not
nearly so many. What's wrong?

   1:   33770946   1080670272  java.util.concurrent.ConcurrentHashMap$HashEntry
   2:   33769230   1080615360  org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference
   3:        753    248541760  [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;
   4:       6046     28941064  [B
   5:        856     11564672  [J
   6:  1781115717592  [C
   7:  134266552  java.lang.String
   8:  814082676472  [Ljava.lang.Object;
   9:  130282586600  [I

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


