[jira] [Updated] (SOLR-6675) Solr webapp deployment is very slow with <jmx/> in solrconfig.xml

2014-12-14 Thread Forest Soup (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Forest Soup updated SOLR-6675:
--
Attachment: 1014.zip

The 0001.txt and 0002.txt are the dump files taken before the solr webapp is deployed. 
The 0003.txt is the dump file taken after the solr webapp is deployed.
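
For context, the element stripped from the subject and description below appears to be the {{<jmx/>}} declaration in solrconfig.xml, which registers Solr's MBeans with a JMX server. A minimal sketch of the relevant config (surrounding elements elided):

{code}
<config>
  <!-- Registers SolrInfoMBeans with the JVM's platform MBean server.
       Removing or commenting this out is the workaround described below:
       deployment speeds up, but statistics are no longer exposed via JMX. -->
  <jmx/>
</config>
{code}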

> Solr webapp deployment is very slow with <jmx/> in solrconfig.xml
> -
>
> Key: SOLR-6675
> URL: https://issues.apache.org/jira/browse/SOLR-6675
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.7
> Environment: Linux Redhat 64bit
>Reporter: Forest Soup
>Priority: Critical
>  Labels: performance
> Attachments: 1014.zip, callstack.png
>
>
> We have a SolrCloud with Solr version 4.7 on Tomcat 7, and our Solr 
> index cores are big (50~100G each). 
> When we start up Tomcat, the Solr webapp deployment is very slow. From 
> Tomcat's catalina log, every time it takes about 10 minutes to get deployed. 
> After analyzing a Java core dump, we noticed it's because the loading process 
> cannot finish until the MBean calculation for the large index is done.
>  
> So we tried to remove the <jmx/> from solrconfig.xml; after that, the loading 
> of the Solr webapp only takes about 1 minute. So we can be sure the MBean 
> calculation for the large index is the root cause.
> Could you please point out whether there is any async way to do statistics 
> monitoring without <jmx/> in solrconfig.xml, or let the calculation happen 
> after the deployment? Thanks!
> The attached callstack.png shows the call stack of the long-blocking 
> thread that is doing the statistics calculation.
> The catalina log of tomcat:
> INFO: Starting Servlet Engine: Apache Tomcat/7.0.54
> Oct 13, 2014 2:00:29 AM org.apache.catalina.startup.HostConfig deployWAR
> INFO: Deploying web application archive 
> /opt/ibm/solrsearch/tomcat/webapps/solr.war
> Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployWAR
> INFO: Deployment of web application archive 
> /opt/ibm/solrsearch/tomcat/webapps/solr.war has finished in 594,325 ms 
> <-- Time taken for the Solr webapp deployment is about 10 minutes 
> ---
> Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/manager
> Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/manager has finished in 2,035 ms
> Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/examples
> Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/examples has finished in 1,789 ms
> Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/docs
> Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/docs has finished in 1,037 ms
> Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/ROOT
> Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/ROOT has finished in 948 ms
> Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/host-manager
> Oct 13, 2014 2:10:30 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/host-manager has finished in 951 ms
> Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
> INFO: Starting ProtocolHandler ["http-bio-8080"]
> Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
> INFO: Starting ProtocolHandler ["ajp-bio-8009"]
> Oct 13, 2014 2:10:31 AM org.apache.catalina.startup.Catalina start
> INFO: Server startup in 601506 ms



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5882) Support scoreMode parameter for BlockJoinParentQParser

2014-12-14 Thread ash fo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246407#comment-14246407
 ] 

ash fo commented on SOLR-5882:
--

How do I apply this patch on Windows? I am using Bitnami Solr 4.10.2 on 
Windows. Thanks.

> Support scoreMode parameter for BlockJoinParentQParser
> --
>
> Key: SOLR-5882
> URL: https://issues.apache.org/jira/browse/SOLR-5882
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 4.8
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-5882.patch
>
>
> Today BlockJoinParentQParser creates queries with hardcoded _scoring mode_ 
> "None": 
> {code:borderStyle=solid}
>   protected Query createQuery(Query parentList, Query query) {
> return new ToParentBlockJoinQuery(query, getFilter(parentList), 
> ScoreMode.None);
>   }
> {code} 
> Analogically BlockJoinChildQParser creates queries with hardcoded _doScores_ 
> "false":
> {code:borderStyle=solid}
>   protected Query createQuery(Query parentListQuery, Query query) {
> return new ToChildBlockJoinQuery(query, getFilter(parentListQuery), 
> false);
>   }
> {code}
> I propose to have ability to configure this scoring options via query syntax.
> Syntax for parent queries can be like:
> {code:borderStyle=solid}
> {!parent which=type:parent scoreMode=None|Avg|Max|Total}
> {code} 
> For child query:
> {code:borderStyle=solid}
> {!child of=type:parent doScores=true|false}
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-14 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246393#comment-14246393
 ] 

Noble Paul edited comment on SOLR-6559 at 12/15/14 7:17 AM:


The patch does not apply properly, so I could not review it properly.

Does it support wildcards yet? I don't see any tests yet.

The default cases should work fine. The default will have {{split=$ROOT&f=/**}}
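
A hedged sketch of what a request against the proposed endpoint might look like, mirroring the {{split}}/{{f}} params of /update/json/docs (the endpoint and the exact param values are illustrative, since the patch is still under review):

{code}
# Hypothetical usage of the proposed /update/xml/docs endpoint:
# 'split' names the XPath at which documents are split out and 'f' maps
# Solr fields to XPaths, mirroring /update/json/docs.
curl 'http://localhost:8983/solr/collection1/update/xml/docs?split=/docs/doc&f=title:/docs/doc/title' \
  -H 'Content-Type: application/xml' --data-binary @input.xml
{code}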


was (Author: noble.paul):
The patch does not apply properly, so I could not review it properly.

Does it support wildcards yet? I don't see any tests yet.

The default cases should work fine; the default will have {{split=/&f=/**}}

> Create an endpoint /update/xml/docs endpoint to do custom xml indexing
> --
>
> Key: SOLR-6559
> URL: https://issues.apache.org/jira/browse/SOLR-6559
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch
>
>
> Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
> XPathRecordReader in DIH to do the same. The syntax would require slight 
> tweaking to match the params of /update/json/docs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-14 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246393#comment-14246393
 ] 

Noble Paul commented on SOLR-6559:
--

The patch does not apply properly, so I could not review it properly.

Does it support wildcards yet? I don't see any tests yet.

The default cases should work fine; the default will have {{split=/&f=/**}}

> Create an endpoint /update/xml/docs endpoint to do custom xml indexing
> --
>
> Key: SOLR-6559
> URL: https://issues.apache.org/jira/browse/SOLR-6559
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch
>
>
> Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
> XPathRecordReader in DIH to do the same. The syntax would require slight 
> tweaking to match the params of /update/json/docs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 703 - Still Failing

2014-12-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/703/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:
Error executing query

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Error executing query
at 
__randomizedtesting.SeedInfo.seed([A144E1B12E2C4605:20A26FA959732639]:0)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:100)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:144)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.luc

[jira] [Updated] (SOLR-6850) AutoAddReplicas does not wait enough for a replica to get live

2014-12-14 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6850:

Attachment: SOLR-6850.patch

Whoops, I created the previous patch too fast.

ClusterStateUtil.waitToSeeLive() has a timeoutInMs param, so this patch keeps 
that consistent and makes OverseerAutoReplicaFailoverThread.addReplica call it 
correctly.

> AutoAddReplicas does not wait enough for a replica to get live
> --
>
> Key: SOLR-6850
> URL: https://issues.apache.org/jira/browse/SOLR-6850
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 4.10.1, 4.10.2, 5.0, Trunk
>Reporter: Varun Thacker
> Attachments: SOLR-6850.patch, SOLR-6850.patch
>
>
> After we have detected that a replica needs failing over, we add a replica 
> and wait to see if it's live.
> Currently we only wait for 30ms, but I think the intention here was to wait 
> for 30s.
> In CloudStateUtil.waitToSeeLive() the conversion should have been 
> {{System.nanoTime() + TimeUnit.NANOSECONDS.convert(timeoutInMs, 
> TimeUnit.SECONDS);}} instead of {{System.nanoTime() + 
> TimeUnit.NANOSECONDS.convert(timeoutInMs, TimeUnit.MILLISECONDS);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6850) AutoAddReplicas does not wait enough for a replica to get live

2014-12-14 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6850:

Attachment: SOLR-6850.patch

Simple patch.

> AutoAddReplicas does not wait enough for a replica to get live
> --
>
> Key: SOLR-6850
> URL: https://issues.apache.org/jira/browse/SOLR-6850
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 4.10.1, 4.10.2, 5.0, Trunk
>Reporter: Varun Thacker
> Attachments: SOLR-6850.patch
>
>
> After we have detected that a replica needs failing over, we add a replica 
> and wait to see if it's live.
> Currently we only wait for 30ms, but I think the intention here was to wait 
> for 30s.
> In CloudStateUtil.waitToSeeLive() the conversion should have been 
> {{System.nanoTime() + TimeUnit.NANOSECONDS.convert(timeoutInMs, 
> TimeUnit.SECONDS);}} instead of {{System.nanoTime() + 
> TimeUnit.NANOSECONDS.convert(timeoutInMs, TimeUnit.MILLISECONDS);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6850) AutoAddReplicas does not wait enough for a replica to get live

2014-12-14 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6850:
---

 Summary: AutoAddReplicas does not wait enough for a replica to get 
live
 Key: SOLR-6850
 URL: https://issues.apache.org/jira/browse/SOLR-6850
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2, 4.10.1, 4.10, 5.0, Trunk
Reporter: Varun Thacker


After we have detected that a replica needs failing over, we add a replica and 
wait to see if it's live.

Currently we only wait for 30ms, but I think the intention here was to wait 
for 30s.

In CloudStateUtil.waitToSeeLive() the conversion should have been 
{{System.nanoTime() + TimeUnit.NANOSECONDS.convert(timeoutInMs, 
TimeUnit.SECONDS);}} instead of {{System.nanoTime() + 
TimeUnit.NANOSECONDS.convert(timeoutInMs, TimeUnit.MILLISECONDS);}}
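
A minimal, self-contained illustration of the conversion mix-up (class and variable names are hypothetical; the value 30 stands in for the intended 30-second wait):

{code}
import java.util.concurrent.TimeUnit;

public class TimeoutConversionDemo {
  public static void main(String[] args) {
    long timeoutInMs = 30; // caller intended roughly 30 seconds

    // Current conversion: the value is treated as milliseconds,
    // so the computed wait is only 30 ms.
    long asMillis = TimeUnit.NANOSECONDS.convert(timeoutInMs, TimeUnit.MILLISECONDS);

    // Conversion proposed above: the value is treated as seconds.
    long asSeconds = TimeUnit.NANOSECONDS.convert(timeoutInMs, TimeUnit.SECONDS);

    System.out.println(asMillis);  // 30000000      (30 ms in nanos)
    System.out.println(asSeconds); // 30000000000   (30 s in nanos)
  }
}
{code}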



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5660) Send request level commitWithin as a param rather than setting it per doc

2014-12-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5660:
---
Attachment: SOLR-5660.patch

Added handling of commitWithin as a parameter - it is passed at the request 
level; when it is not present there, it is passed per document.
Added a test for the request level, but failed to reproduce commitWithin per 
document 
(org.apache.solr.cloud.FullSolrCloudDistribCmdsTest#testIndexingCommitWithinOnAttr).
 
If two documents contain different commitWithin values, the request fails with 
an exception:
{quote}org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
Illegal to have multiple roots (start tag in epilog?).
 at [row,col {unknown-source}]: [1,236]
at 
__randomizedtesting.SeedInfo.seed([FC019F99FE2DEADF:7DE7118189728AE3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingCommitWithinOnAttr(FullSolrCloudDistribCmdsTest.java:183)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:143)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618){quote}

Is this a bug due to malformed XML?
If one document has a commitWithin value set, it is not taken into account 
(commitWithin=-1). It seems this value is unmarshalled incorrectly in 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec#unmarshal.
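
For reference, a minimal SolrJ sketch of the request-level usage the test exercises (URL and field values are illustrative):

{code}
import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinDemo {
  public static void main(String[] args) throws SolrServerException, IOException {
    // commitWithin set once on the request instead of repeated per document.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    UpdateRequest req = new UpdateRequest();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    req.add(doc);
    req.setCommitWithin(5000); // ask Solr to commit within 5 seconds
    req.process(server);
    server.shutdown();
  }
}
{code}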


> Send request level commitWithin as a param rather than setting it per doc
> -
>
> Key: SOLR-5660
> URL: https://issues.apache.org/jira/browse/SOLR-5660
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-5660.patch
>
>
> In SolrCloud the commitWithin parameter is sent per-document even if it is 
> set on the entire request.
> We should send request level commitWithin as a param rather than setting it 
> per doc - that would mean less repeated data in the request. We still need to 
> properly support per doc like this as well though, because that is the level 
> cmd objects support and we are distributing cmd objects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2014-12-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246302#comment-14246302
 ] 

Erick Erickson commented on SOLR-6840:
--

OK, we need to decide how we'll approach persistence in particular.

1> There's no persist=false in the new solr.xml; ConfigSolrXml.isPersistent 
always returns true. Or I'm missing something again. 75 tests fail since they 
try to write out to the solr test files tree and the framework (rightly) barfs 
(see <2> below). This assumes that I do the trick in <2> of my _previous_ 
comment. Any suggestions?

2> You can get around <1> by copying all the config files to a temp dir and 
making solr_home point there - for many, many tests. This means that a zillion 
files get copied all over the place, but it fixes 75 test failures.

So, in general, what's the story with dealing with persistence in the new 
format, particularly in the test world? I really don't want to re-introduce the 
whole horrible persistence stuff. [~romseygeek], I'd be particularly 
interested in your take. Is there any real reason for 
ConfigSolrXml.isPersistent to return true? 

And also note that there are a bunch of places that really need some attention. 
I blindly removed all of them wholesale, which is crude; they ought to be dealt 
with one-by-one.

> Remove legacy solr.xml mode
> ---
>
> Key: SOLR-6840
> URL: https://issues.apache.org/jira/browse/SOLR-6840
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6840.patch
>
>
> On the [Solr Cores and solr.xml 
> page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
>  the Solr Reference Guide says:
> {quote}
> Starting in Solr 4.3, Solr will maintain two distinct formats for 
> {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
> have become accustomed to in which all of the cores one wishes to define in a 
> Solr instance are defined in {{solr.xml}} in 
> {{...}} tags. This format will continue to be 
> supported through the entire 4.x code line.
> As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
> Solr will support _core discovery_. [...]
> The new "core discovery mode" structure for solr.xml will become mandatory as 
> of Solr 5.0, see: Format of solr.xml.
> {quote}
> AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
> trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #787: POMs out of sync

2014-12-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/787/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([962C863D1686C358]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:105)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B5B140E180CB69EB]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B5B140E180CB69EB]:0)




Build Log:
[...truncated 53856 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:552: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:204: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 398 minutes 41 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1995 - Failure!

2014-12-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1995/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
reload the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: reload 
the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([73CED626CBB727EE:F228583EBCE847D2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.CollectionAdminRequest.process(CollectionAdminRequest.java:379)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testSolrJAPICalls(CollectionsAPIDistributedZkTest.java:332)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:201)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:

[jira] [Resolved] (LUCENE-6112) Compile error with FST package example code

2014-12-14 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi resolved LUCENE-6112.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Thanks, Uchida-san!
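
For reference, a hedged sketch of the corrected fragment ({{builder}}, {{inputValues}} and {{outputValues}} come from the surrounding package example; only the {{scratchInts}} type changes):

{code}
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRefBuilder;
import org.apache.lucene.util.fst.Util;

// On Lucene 4.10+, scratchInts must be an IntsRefBuilder, not an IntsRef:
IntsRefBuilder scratchInts = new IntsRefBuilder();
BytesRef scratchBytes = new BytesRef(inputValues[i]);
builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
{code}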

> Compile error with FST package example code
> ---
>
> Key: LUCENE-6112
> URL: https://issues.apache.org/jira/browse/LUCENE-6112
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/FSTs
>Affects Versions: 4.10.2
>Reporter: Tomoko Uchida
>Assignee: Koji Sekiguchi
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6112.patch
>
>
> I ran the FST construction example given in package.html with Lucene 4.10 and 
> found a compile error.
> http://lucene.apache.org/core/4_10_2/core/index.html?org/apache/lucene/util/fst/package-summary.html
> javac complained as below.
> "FSTTest" is my test class, just copied from javadoc's example.
> {code}
> $ javac -cp /opt/lucene-4.10.2/core/lucene-core-4.10.2.jar FSTTest.java 
> FSTTest.java:28: error: method toIntsRef in class Util cannot be applied to 
> given types;
>   builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
>   ^
>   required: BytesRef,IntsRefBuilder
>   found: BytesRef,IntsRef
>   reason: actual argument IntsRef cannot be converted to IntsRefBuilder by 
> method invocation conversion
> Note: FSTTest.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 1 error
> {code}
> I modified the scratchInts variable type from IntsRef to IntsRefBuilder and it 
> worked fine. (I checked the o.a.l.u.fst.TestFSTs.java test case and my 
> modification seems to be correct.)
> Util.toIntsRef() takes an IntsRefBuilder as its 2nd argument instead of an 
> IntsRef since 4.10, so the Javadocs should also be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6112) Compile error with FST package example code

2014-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246240#comment-14246240
 ] 

ASF subversion and git services commented on LUCENE-6112:
-

Commit 1645549 from [~okoji091] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645549 ]

LUCENE-6112: Fix compile error in FST package example code

> Compile error with FST package example code
> ---
>
> Key: LUCENE-6112
> URL: https://issues.apache.org/jira/browse/LUCENE-6112
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/FSTs
>Affects Versions: 4.10.2
>Reporter: Tomoko Uchida
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: LUCENE-6112.patch
>
>
> I ran the FST construction example given in package.html with Lucene 4.10 and 
> found a compile error.
> http://lucene.apache.org/core/4_10_2/core/index.html?org/apache/lucene/util/fst/package-summary.html
> javac complained as below.
> "FSTTest" is my test class, just copied from javadoc's example.
> {code}
> $ javac -cp /opt/lucene-4.10.2/core/lucene-core-4.10.2.jar FSTTest.java 
> FSTTest.java:28: error: method toIntsRef in class Util cannot be applied to 
> given types;
>   builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
>   ^
>   required: BytesRef,IntsRefBuilder
>   found: BytesRef,IntsRef
>   reason: actual argument IntsRef cannot be converted to IntsRefBuilder by 
> method invocation conversion
> Note: FSTTest.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 1 error
> {code}
> I modified the scratchInts variable type from IntsRef to IntsRefBuilder and it 
> worked fine. (I checked the o.a.l.u.fst.TestFSTs.java test case and my 
> modification seems to be correct.)
> Util.toIntsRef() takes an IntsRefBuilder as its 2nd argument instead of an 
> IntsRef since 4.10, so the Javadocs should also be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6112) Compile error with FST package example code

2014-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246238#comment-14246238
 ] 

ASF subversion and git services commented on LUCENE-6112:
-

Commit 1645548 from [~okoji091] in branch 'dev/trunk'
[ https://svn.apache.org/r1645548 ]

LUCENE-6112: Fix compile error in FST package example code

> Compile error with FST package example code
> ---
>
> Key: LUCENE-6112
> URL: https://issues.apache.org/jira/browse/LUCENE-6112
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/FSTs
>Affects Versions: 4.10.2
>Reporter: Tomoko Uchida
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: LUCENE-6112.patch
>
>
> I ran the FST construction example given in package.html with Lucene 4.10 and 
> found a compile error.
> http://lucene.apache.org/core/4_10_2/core/index.html?org/apache/lucene/util/fst/package-summary.html
> javac complained as below.
> "FSTTest" is my test class, just copied from javadoc's example.
> {code}
> $ javac -cp /opt/lucene-4.10.2/core/lucene-core-4.10.2.jar FSTTest.java 
> FSTTest.java:28: error: method toIntsRef in class Util cannot be applied to 
> given types;
>   builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
>   ^
>   required: BytesRef,IntsRefBuilder
>   found: BytesRef,IntsRef
>   reason: actual argument IntsRef cannot be converted to IntsRefBuilder by 
> method invocation conversion
> Note: FSTTest.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 1 error
> {code}
> I modified the scratchInts variable type from IntsRef to IntsRefBuilder and it 
> worked fine. (I checked the o.a.l.u.fst.TestFSTs.java test case and my 
> modification seems to be correct.)
> Util.toIntsRef() takes an IntsRefBuilder as its 2nd argument instead of an 
> IntsRef since 4.10, so the Javadocs should also be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2014-12-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
FieldCache is no longer in regular use. Instead all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But the lookup from docID to top-level ordinals is slower 
using MultiDocValues.
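
(A hedged sketch of the slower path, using Lucene 4.10-era APIs; the reader and field names are illustrative:)

{code}
// Top-level ordinals through MultiDocValues: each getOrd() call goes
// through an OrdinalMap mapping segment ordinals to global ordinals -
// the extra indirection that makes the lookup slower than the old
// top-level FieldCache.
SortedDocValues values = MultiDocValues.getSortedValues(topLevelReader, "collapse_field");
int globalOrd = values.getOrd(docId);
{code}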

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%.  For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this is that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithms use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new "hint" 
parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}

  was:
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
FieldCache is no longer in regular use. Instead all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But the lookup from docID to top-level ordinals is slower 
using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%.  For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this is that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithms use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new "hint" 
parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:

fq={!collapse field=x hint=FAST_QUERY}

> Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
> ---
>
> Key: SOLR-6581
> URL: https://issues.apache.org/jira/browse/SOLR-6581
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6581.patch, SOLR-6581.patch
>
>
> *Background*
> The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
> are optimized to work with a top level FieldCache. Top level FieldCaches have 
> a very fast docID to top-level ordinal lookup. Fast access to the top-level 
> ordinals allows for very high performance field collapsing on high 
> cardinality fields. 
> LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
> FieldCache is no longer in regular use. Instead all top level caches are 
> accessed through MultiDocValues. 
> There are some major advantages to using MultiDocValues rather than a top 
> leve

[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2014-12-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
FieldCache is no longer in regular use. Instead all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But the lookup from docID to top-level ordinals is slower 
using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%.  For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this is that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithms use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new "hint" 
parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:

fq={!collapse field=x hint=FAST_QUERY}

  was:
There were changes made to the CollapsingQParserPlugin and ExpandComponent in 
the 5x branch that were driven by changes to the Lucene Collectors API and 
DocValues API. This ticket is to review the 5x implementation and make any 
changes necessary in preparation for a 5.0 release.




> Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
> ---
>
> Key: SOLR-6581
> URL: https://issues.apache.org/jira/browse/SOLR-6581
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6581.patch, SOLR-6581.patch
>
>
> *Background*
> The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
> are optimized to work with a top level FieldCache. Top level FieldCaches have 
> a very fast docID to top-level ordinal lookup. Fast access to the top-level 
> ordinals allows for very high performance field collapsing on high 
> cardinality fields. 
> LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
> FieldCache is no longer in regular use. Instead all top level caches are 
> accessed through MultiDocValues. 
> There are some major advantages to using MultiDocValues rather than a top 
> level FieldCache. But the lookup from docID to top-level ordinals is slower 
> using MultiDocValues.
> My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
> to use MultiDocValues, the performance drop is around 100%.  For some use 
> cases this performance drop is a blocker.
> *What About Faceting?*
> String faceting also relies on the top level ordinals. Is faceting 
> performance affected as well? My testing has shown that faceting performance 
> is affected much less than collapsing. 
> One possible reason for this is that field collapsing is memory bound and 
> faceting is not, so the additional memory accesses needed for MultiDocValues 
> affect field collapsing much more than faceting.
> *Proposed Solution*
> The proposed solution is to have the default Collapse and Expand algorithms 
> use MultiDocValues, but to provide an option to use a top level FieldCache if 
> the performance of MultiDocValues is a blocker.
> The proposed mechanism for switching to the FieldCache would be a new "hint" 
> parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
> FieldCache would be used for both Collapse and Expand.
> Example syntax:
> fq={!collapse field=x hint=FAST_QUERY}
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6840) Remove legacy solr.xml mode

2014-12-14 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6840:
-
Attachment: SOLR-6840.patch

OK, the bare beginnings of the patch. What it does:

1> removes _all_ of the references I could find to any legacy-format solr.xml.
2> Tries to deal with SolrTestCaseJ4 and TestHarness. TestHarness used to have 
a hard-coded solr.xml string.
3> removes anything like solr*old*.xml from the source tree.

Gets 78 more tests to pass. There are only 158 failing now; it started with 236 
failing. So it's at least progress. It sounds worse than it is, I suspect.

So I'm putting it up for comments on the approach, particularly the new 
core.properties, SolrTestCaseJ4 and TestHarness (no, I really haven't worked 
on this last one yet).

And I'm also wondering if there's any good way to split this work up. There's 
going to be a LOT of tedious work to get all the tests to pass, but reconciling 
multiple people's patches will be an absolute pain. Maybe if people work on 
bits, just put up patches (appropriately labeled) that include just the delta 
(or as close as one can get easily), and I'll reconcile and keep the uber-patch 
as current as I can? I may be able to look at TestHarness a bit more tonight...
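
For readers following along, a hedged sketch of the discovery-mode layout that replaces the legacy solr.xml core definitions (names and values are illustrative):

{code}
# core.properties, dropped into a core's instance directory; its presence
# is what core discovery keys on. An empty file is enough for defaults.
name=collection1
loadOnStartup=true
transient=false
{code}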

> Remove legacy solr.xml mode
> ---
>
> Key: SOLR-6840
> URL: https://issues.apache.org/jira/browse/SOLR-6840
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6840.patch
>
>
> On the [Solr Cores and solr.xml 
> page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
>  the Solr Reference Guide says:
> {quote}
> Starting in Solr 4.3, Solr will maintain two distinct formats for 
> {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
> have become accustomed to in which all of the cores one wishes to define in a 
> Solr instance are defined in {{solr.xml}} in 
> {{...}} tags. This format will continue to be 
> supported through the entire 4.x code line.
> As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
> Solr will support _core discovery_. [...]
> The new "core discovery mode" structure for solr.xml will become mandatory as 
> of Solr 5.0, see: Format of solr.xml.
> {quote}
> AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
> trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-14 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-6849:

Attachment: SOLR-6849.patch

Patch, adding a 'remoteHost' parameter to the RemoteSolrException constructor.
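
(A hedged sketch of the idea rather than the patch itself; the exact signature in the patch may differ:)

{code}
// Carry the remote URL in the exception message so the failing host
// is identifiable from the stack trace alone.
public RemoteSolrException(String remoteHost, int code, String msg, Throwable th) {
  super(code, "Error from server at " + remoteHost + ": " + msg, th);
}
{code}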

> RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
> remote host
> -
>
> Key: SOLR-6849
> URL: https://issues.apache.org/jira/browse/SOLR-6849
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6849.patch
>
>
> All very well telling me there was an error on a remote host, but it's 
> difficult to work out what's wrong if it doesn't tell me *which* host the 
> error was on...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-6813:


Assignee: Joel Bernstein

> distrib.singlePass does not work for expand-request - start/rows included
> -
>
> Key: SOLR-6813
> URL: https://issues.apache.org/jira/browse/SOLR-6813
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Reporter: Per Steffensen
>Assignee: Joel Bernstein
>  Labels: distributed_search, search
> Attachments: test_that_reveals_the_problem.patch
>
>
> Using distrib.singlePass does not work for expand-requests. Even after the 
> fix provided to SOLR-6812, it does not work for requests where you add start 
> and/or rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246182#comment-14246182
 ] 

Joel Bernstein commented on SOLR-6813:
--

Ok, I'll take a look. 

> distrib.singlePass does not work for expand-request - start/rows included
> -
>
> Key: SOLR-6813
> URL: https://issues.apache.org/jira/browse/SOLR-6813
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Reporter: Per Steffensen
>  Labels: distributed_search, search
> Attachments: test_that_reveals_the_problem.patch
>
>
> Using distrib.singlePass does not work for expand-requests. Even after the 
> fix provided to SOLR-6812, it does not work for requests where you add start 
> and/or rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-14 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-6849:
---

 Summary: RemoteSolrExceptions thrown from HttpSolrServer should 
include the URL of the remote host
 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk


All very well telling me there was an error on a remote host, but it's 
difficult to work out what's wrong if it doesn't tell me *which* host the error 
was on...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2014-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246176#comment-14246176
 ] 

ASF subversion and git services commented on LUCENE-2878:
-

Commit 1645535 from [~romseygeek] in branch 'dev/branches/lucene2878'
[ https://svn.apache.org/r1645535 ]

LUCENE-2878: precommit cleanups

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Robert Muir
>  Labels: gsoc2014
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: the ones which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you can not use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature, since they can not score based on term proximity: scores don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on that using the bulkpostings API. I would have done 
> the first cut on trunk, but TermScorer there works on BlockReaders that do not 
> expose positions, while the one in this branch does. I started adding a new 
> Positions class which users can pull from a scorer; to prevent unnecessary 
> positions enums I added ScorerContext#needsPositions and eventually 
> Scorer#needsPayloads to create the corresponding enum on demand. Yet, 
> currently only TermQuery / TermScorer implements this API and the others 
> simply return null instead. 
> To show that the API really works and our BulkPostings work fine too with 
> positions I cut over TermSpanQuery to use a TermScorer under the hood and 
> nuked TermSpans entirely. A nice sideeffect of this was that the Position 
> BulkReading implementation got some exercise which now :) work all with 
> positions while Payloads for bulkreading are kind of experimental in the 
> patch and those only work with Standard codec. 
> So all spans now work on top of TermScorer ( I truly hate spans since today ) 
> including the ones that need Payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go one with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext) which I should probably do on trunk 
> first but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't 
> look into the MemoryIndex BulkPostings API yet)
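
A rough sketch, assembled only from the names mentioned above 
(ScorerContext#needsPositions and the Positions object pulled from a scorer); 
this is illustrative guesswork at the branch API, not committed code:

{code}
// Hypothetical usage: request positions up front so the scorer creates the
// positions enum on demand, then consume them while iterating matches.
ScorerContext ctx = ScorerContext.def().needsPositions(true);
Scorer scorer = weight.scorer(readerContext, ctx);
while (scorer.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
  Positions positions = scorer.positions(); // null for queries not yet cut over
  // ... use term positions for proximity-aware scoring ...
}
{code}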



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2014-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246143#comment-14246143
 ] 

ASF subversion and git services commented on LUCENE-2878:
-

Commit 1645528 from [~romseygeek] in branch 'dev/branches/lucene2878'
[ https://svn.apache.org/r1645528 ]

LUCENE-2878: last nocommits

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Robert Muir
>  Labels: gsoc2014
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: the ones which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on this using the bulkpostings API. I would have done 
> the first cut on trunk, but TermScorer there works on a BlockReader that does 
> not expose positions, while the one in this branch does. I started adding a 
> new Positions class which users can pull from a scorer; to prevent 
> unnecessary positions enums I added ScorerContext#needsPositions and 
> eventually Scorer#needsPayloads to create the corresponding enum on demand. 
> Yet currently only TermQuery / TermScorer implements this API; the others 
> simply return null instead. 
> To show that the API really works, and that our BulkPostings work fine with 
> positions too, I cut TermSpanQuery over to use a TermScorer under the hood 
> and nuked TermSpans entirely. A nice side effect of this was that the 
> Positions BulkReading implementation got some exercise, and it now :) works 
> entirely with positions, while payloads for bulk reading are kind of 
> experimental in the patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer (I truly hate spans since today), 
> including the ones that need payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk 
> first, but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails, but I 
> haven't looked into the MemoryIndex BulkPostings API yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2014-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246133#comment-14246133
 ] 

ASF subversion and git services commented on LUCENE-2878:
-

Commit 1645525 from [~romseygeek] in branch 'dev/branches/lucene2878'
[ https://svn.apache.org/r1645525 ]

LUCENE-2878: Scoring on positionfilterqueries

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Robert Muir
>  Labels: gsoc2014
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: the ones which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on this using the bulkpostings API. I would have done 
> the first cut on trunk, but TermScorer there works on a BlockReader that does 
> not expose positions, while the one in this branch does. I started adding a 
> new Positions class which users can pull from a scorer; to prevent 
> unnecessary positions enums I added ScorerContext#needsPositions and 
> eventually Scorer#needsPayloads to create the corresponding enum on demand. 
> Yet currently only TermQuery / TermScorer implements this API; the others 
> simply return null instead. 
> To show that the API really works, and that our BulkPostings work fine with 
> positions too, I cut TermSpanQuery over to use a TermScorer under the hood 
> and nuked TermSpans entirely. A nice side effect of this was that the 
> Positions BulkReading implementation got some exercise, and it now :) works 
> entirely with positions, while payloads for bulk reading are kind of 
> experimental in the patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer (I truly hate spans since today), 
> including the ones that need payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk 
> first, but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails, but I 
> haven't looked into the MemoryIndex BulkPostings API yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6832) Queries should be served locally rather than being forwarded to another replica

2014-12-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246130#comment-14246130
 ] 

Shawn Heisey commented on SOLR-6832:


A slightly better choice might be preferLocalReplicas ... but the Shards name is 
pretty good too.

> Queries should be served locally rather than being forwarded to another replica
> 
>
> Key: SOLR-6832
> URL: https://issues.apache.org/jira/browse/SOLR-6832
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Sachin Goyal
>
> Currently, I see that the code flow for a query in SolrCloud is as follows:
> For distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
> For non-distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
> \\
> \\
> \\
> For a distributed query, the request is always sent to all the shards even if 
> the originating SolrCore (handling the original distributed query) is a 
> replica of one of the shards.
> If the originating SolrCore can check itself before sending HTTP requests for 
> any shard, we can probably save some network hops and gain some 
> performance.
> \\
> \\
> We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
> to fix this behavior (most likely the former and not the latter).
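
An illustrative sketch of the proposed check (simplified pseudologic, not the 
actual SearchHandler/HttpShardHandler code; thisNodeBaseUrl and shardUrls are 
assumed inputs):

{code}
// Sketch: prefer a replica hosted on this node before fanning out over HTTP.
for (String shardUrl : shardUrls) {
  if (shardUrl.startsWith(thisNodeBaseUrl)) {
    // This node hosts a replica of the shard: execute the shard request
    // locally (e.g. via QueryComponent.process()) and skip the network hop.
  } else {
    shardHandler.submit(sreq, shardUrl, params);  // remote shard, as before
  }
}
{code}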



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6833) bin/solr -e foo should not use server/solr as the SOLR_HOME

2014-12-14 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246007#comment-14246007
 ] 

Alexandre Rafalovitch commented on SOLR-6833:
-

Looks good. I hit the script bombing out on a misspelled name, so I opened 
SOLR-6848 for that.

The rest here are just random thoughts from using the scripts; feel free to 
ignore them if this has already been discussed.

# The example run generates console.log even though the example log4j 
configuration does not send anything to it, so the file is always empty.
# There is also solr_gc.log for the examples; I am not sure if that's a big 
deal. It grows kind of fast, but so does the main log.
# In the script, if the port is busy, it says:
{quote}
Port 8983 is already being used by another process (pid: 13221)
{quote}
It could be even friendlier for a first-time user to add after that: _Use the 
-p flag to start with a different port_.

> bin/solr -e foo should not use server/solr as the SOLR_HOME
> ---
>
> Key: SOLR-6833
> URL: https://issues.apache.org/jira/browse/SOLR-6833
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6833.patch
>
>
> I think it's weird right now that running bin/solr with the "-e" (example) 
> option causes it to create example solr instances inside the server directory.
> I think that's fine for running solr "normally" (ie: "start"), but if you use 
> "-e" then the solr.solr.home for those examples should instead be 
> created under $SOLR_TIP/example.
> I would even go so far as to suggest that the *log* files created should live 
> in that directory as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6848) Typing incorrect configuration name for cloud example aborts the script, leaves nodes running

2014-12-14 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-6848:
---

 Summary: Typing incorrect configuration name for cloud example 
aborts the script, leaves nodes running
 Key: SOLR-6848
 URL: https://issues.apache.org/jira/browse/SOLR-6848
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Priority: Minor


Please choose a configuration for the gettingstarted collection, available 
options are:
*basic_configs*, data_driven_schema_configs, or sample_techproducts_configs 
\[data_ddriven_schema_configs\] *basic_config*
Connecting to ZooKeeper at localhost:9983
Exception in thread "main" java.io.FileNotFoundException: Specified config 
basic_config not found in .../solr-5.0.0-SNAPSHOT/server/solr/configsets
at 
org.apache.solr.util.SolrCLI$CreateCollectionTool.runCloudTool(SolrCLI.java:867)
at 
org.apache.solr.util.SolrCLI$CreateCollectionTool.runTool(SolrCLI.java:824)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:185)


SolrCloud example running, please visit http://localhost:8983/solr 
$




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-14 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245959#comment-14245959
 ] 

Alexandre Rafalovitch commented on SOLR-4792:
-

The top level README.txt (in 5.x) still has the following text:
{quote}
dist/solr-XX.war
  The Apache Solr Application.  Deploy this WAR file to any servlet
  container to run Apache Solr.
{quote}

This should probably be deleted, or at least point at the new location of the 
WAR file with a warning that it is NOT official.

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6748) Additional resources to the site to help new Solr users ramp up quicker

2014-12-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245929#comment-14245929
 ] 

Steve Rowe commented on SOLR-6748:
--

bq. Being a book, can I come back and submit it to be added to the website with 
the others?

Please do.

> Additional resources to the site to help new Solr users ramp up quicker
> ---
>
> Key: SOLR-6748
> URL: https://issues.apache.org/jira/browse/SOLR-6748
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Xavier Morera
>
> I would like to request the addition of an online training I created for 
> Pluralsight called *Getting Started with Enterprise Search using Apache Solr* 
> in the following page: http://lucene.apache.org/solr/resources.html
> It is not exactly just a video; it is an online training, so I have no idea 
> if it should be added beneath videos or separately.
> It aims to take a developer with absolutely no knowledge of Solr or even 
> search engines and get them to the point of being able to create a basic 
> POC-style application with Solr in the backend. A few thousand people have 
> watched it, and I have received very positive feedback on how it has helped 
> people get started quickly and reduced the entry barrier.
> Is this possible? The url of the training is:
> http://www.pluralsight.com/courses/table-of-contents/enterprise-search-using-apache-solr
> I believe it will help a lot of people get started quicker.
> Here is the full story of how this training came to be:
> A while back I was a total Solr rookie, but I knew I needed it for one of my 
> projects. I had a little bit of a hard time getting started, but I succeeded 
> after a lot of hard work and working with other pretty good Solr developers.
> I then went on to create a system which is doing pretty well now. But I 
> decided that I wanted to create a resource that would help people with 
> absolutely no knowledge of Solr or search engines get started as quickly as 
> possible. And given that I am already a trainer/author at Pluralsight, having 
> focused mainly on Agile development, I thought this was the right place to 
> start helping others.
> And so I did. I have received positive feedback, and given my background as a 
> trainer I have also delivered it in person as "Solr for the Uninitiated", 
> likewise for people with no previous knowledge of Solr. 
> It has been received well, to the extent that I have been hired to turn it 
> into a book, which I am writing at the moment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6847) LeaderInitiatedRecoveryThread compares wrong replica's state with lirState

2014-12-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6847:
---

 Summary: LeaderInitiatedRecoveryThread compares wrong replica's 
state with lirState
 Key: SOLR-6847
 URL: https://issues.apache.org/jira/browse/SOLR-6847
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, Trunk


LeaderInitiatedRecoveryThread looks at a random replica to figure out whether it 
should re-publish the LIR state as "down". It does, however, publish the LIR 
state for the correct replica.

The bug has always been there. The thread used the ZkStateReader.getReplicaProps 
method with the coreName to find the correct replica. However, the coreName 
parameter in getReplicaProps was unused; I removed it in SOLR-6240 but didn't 
find and fix this bug then.

The possible side effects of this bug are that we may republish the LIR state 
multiple times and/or, in rare cases, cause a double 'requestrecovery' to be 
executed on a replica.
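
An illustrative sketch of the intended lookup (simplified; not the actual 
patch):

{code}
// Sketch: select the replica whose core name matches the one whose LIR state
// will be published, instead of looking at a random replica of the shard.
Replica target = null;
for (Replica replica : slice.getReplicas()) {
  if (coreName.equals(replica.getStr(ZkStateReader.CORE_NAME_PROP))) {
    target = replica;  // the replica we actually publish state for
    break;
  }
}
if (target != null
    && !ZkStateReader.DOWN.equals(target.getStr(ZkStateReader.STATE_PROP))) {
  // re-publish the LIR "down" state for this specific replica
}
{code}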



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6063) allow core reloading with parameters in core admin

2014-12-14 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir updated SOLR-6063:
-
Attachment: SOLR-6063.patch

> allow core reloading with parameters in core admin
> ---
>
> Key: SOLR-6063
> URL: https://issues.apache.org/jira/browse/SOLR-6063
> Project: Solr
>  Issue Type: Improvement
>Reporter: Elran Dvir
> Attachments: SOLR-6063.patch, SOLR-6063.patch
>
>
> The patch allows adding parameters to the core admin RELOAD command, as in the 
> CREATE command, and it changes the core configuration as indicated by the 
> parameters. Any parameter that is not indicated in the command stays the same 
> as before.
> For example, the command 
> solr/admin/cores?action=RELOAD&core=core0&transient=true
> will change the core to be transient.
> In my patch, I removed the parameter isTransientCore from the method 
> registerCore in class CoreContainer. I chose to use cd.isTransient() instead.
> The patch is based on Solr 4.8.
> Thanks.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6063) allow core reloading with parameters in core admin

2014-12-14 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245914#comment-14245914
 ] 

Elran Dvir commented on SOLR-6063:
--

Hi all,

I am attaching a new version of the patch.
When a core is changed from non-transient to transient, it has to be closed 
first.
Otherwise, you will get an exception like this:
REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@f922b95 
(your_core_name) has a reference count of 1.
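
A heavily simplified sketch of the idea (illustrative only, not the attached 
patch):

{code}
// Sketch: release the reference to the old core before it is re-registered as
// transient, so its refcount can reach zero instead of triggering the
// REFCOUNT ERROR above.
SolrCore oldCore = coreContainer.getCore(coreName);  // increments the refcount
try {
  // ... apply the new CoreDescriptor properties (e.g. transient=true) ...
} finally {
  oldCore.close();  // release our reference
}
{code}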

Thanks.

> allow core reloading with parameters in core admin
> ---
>
> Key: SOLR-6063
> URL: https://issues.apache.org/jira/browse/SOLR-6063
> Project: Solr
>  Issue Type: Improvement
>Reporter: Elran Dvir
> Attachments: SOLR-6063.patch
>
>
> The patch allows adding parameters to the core admin RELOAD command, as in the 
> CREATE command, and it changes the core configuration as indicated by the 
> parameters. Any parameter that is not indicated in the command stays the same 
> as before.
> For example, the command 
> solr/admin/cores?action=RELOAD&core=core0&transient=true
> will change the core to be transient.
> In my patch, I removed the parameter isTransientCore from the method 
> registerCore in class CoreContainer. I chose to use cd.isTransient() instead.
> The patch is based on Solr 4.8.
> Thanks.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4564) Add taxonomy index upgrade utility

2014-12-14 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-4564.

Resolution: Won't Fix

The facet module has been rewritten twice during 4.x, already requiring an 
effective rewrite of the application. Therefore I think it's fair to assume that 
this issue is no longer relevant. I'm closing it; we can reopen it if needed.

> Add taxonomy index upgrade utility
> --
>
> Key: LUCENE-4564
> URL: https://issues.apache.org/jira/browse/LUCENE-4564
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/facet
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: Trunk
>
>
> Currently there's no way for an app to upgrade a taxonomy index to the newest 
> index format. The problem is that, unlike search indexes, which may merge 
> segments often, the taxonomy index is not likely to do many merges. At some 
> point most taxonomies become fixed (i.e. new categories are not added, or only 
> rarely), and therefore some old segments may never get merged.
> When we release Lucene 5.0, support for 3.x indexes will be removed, and so 
> taxonomies that were created with 3.x won't be readable anymore.
> While one can use IndexUpgrader (I think) to upgrade the taxonomy index, it 
> may not be so trivial for users to realize that, as it may not be evident 
> from the DirTaxoWriter/Reader API that there's a regular Lucene index behind 
> the scenes.
> A tool like TaxonomyUpgraderTool, even if simple and just using IndexUpgrader, 
> may make it more convenient for users to upgrade their taxonomy index.
> Opening as a placeholder for 5.0. Also marking as blocker, so we don't forget 
> about it before the release.
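
A minimal sketch of what such a tool might have looked like, assuming the 
Lucene 5.x API and that IndexUpgrader can be pointed directly at the taxonomy 
directory as suggested above (the issue was resolved Won't Fix, so this is 
illustrative only):

{code}
import java.nio.file.Paths;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Hypothetical TaxonomyUpgraderTool: the taxonomy index is a regular Lucene
// index behind the scenes, so IndexUpgrader can rewrite its old segments.
public class TaxonomyUpgraderTool {
  public static void main(String[] args) throws Exception {
    try (Directory taxoDir = FSDirectory.open(Paths.get(args[0]))) {
      new IndexUpgrader(taxoDir).upgrade();
    }
  }
}
{code}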



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6112) Compile error with FST package example code

2014-12-14 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi reassigned LUCENE-6112:
--

Assignee: Koji Sekiguchi

> Compile error with FST package example code
> ---
>
> Key: LUCENE-6112
> URL: https://issues.apache.org/jira/browse/LUCENE-6112
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/FSTs
>Affects Versions: 4.10.2
>Reporter: Tomoko Uchida
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: LUCENE-6112.patch
>
>
> I ran the FST construction example from package.html with Lucene 4.10 and 
> found a compile error.
> http://lucene.apache.org/core/4_10_2/core/index.html?org/apache/lucene/util/fst/package-summary.html
> javac complained as below.
> "FSTTest" is my test class, just copied from the javadoc's example.
> {code}
> $ javac -cp /opt/lucene-4.10.2/core/lucene-core-4.10.2.jar FSTTest.java 
> FSTTest.java:28: error: method toIntsRef in class Util cannot be applied to 
> given types;
>   builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
>   ^
>   required: BytesRef,IntsRefBuilder
>   found: BytesRef,IntsRef
>   reason: actual argument IntsRef cannot be converted to IntsRefBuilder by 
> method invocation conversion
> Note: FSTTest.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 1 error
> {code}
> I modified the scratchInts variable's type from IntsRef to IntsRefBuilder and 
> it worked fine. (I checked the o.a.l.u.fst.TestFSTs test case, and my 
> modification seems to be correct.)
> Util.toIntsRef() has taken an IntsRefBuilder as its 2nd argument instead of an 
> IntsRef since 4.10, so the Javadocs should be fixed as well.
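
For reference, a sketch of the corrected example under the fix described above 
(Lucene 4.10 API; only the scratchInts declaration differs from the javadoc, 
and FSTTest's surrounding class and imports are omitted):

{code}
// Corrected per the description: scratchInts is an IntsRefBuilder, not an IntsRef.
String[] inputValues = {"cat", "dog", "dogs"};
long[] outputValues = {5, 7, 12};

PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
Builder<Long> builder = new Builder<>(FST.INPUT_TYPE.BYTE1, outputs);
BytesRef scratchBytes = new BytesRef();
IntsRefBuilder scratchInts = new IntsRefBuilder();  // was IntsRef in the javadoc
for (int i = 0; i < inputValues.length; i++) {
  scratchBytes.copyChars(inputValues[i]);  // deprecated in 4.10, hence the javac note
  builder.add(Util.toIntsRef(scratchBytes, scratchInts), outputValues[i]);
}
FST<Long> fst = builder.finish();
{code}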



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2014-12-14 Thread Avishai Ish-Shalom (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avishai Ish-Shalom updated SOLR-6846:
-
Attachment: (was: solr-uninvertedfield-cache.patch)

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}}, the code will deadlock because 
> {{cache.wait()}} is synchronized on the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2014-12-14 Thread Avishai Ish-Shalom (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avishai Ish-Shalom updated SOLR-6846:
-
Attachment: SOLR-6846.patch

A patch using a single synchronized block and no .wait() calls; it should be 
free of deadlocks.
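
A generic sketch of that approach (illustrative, not the attached patch):

{code}
// One synchronized block, no wait()/notifyAll(): build the entry while holding
// the lock. Deadlock-free, at the cost of serializing concurrent loads.
public static UnInvertedField getUnInvertedField(String field, SolrIndexSearcher searcher)
    throws IOException {
  SolrCache<String, UnInvertedField> cache = searcher.getFieldValueCache();
  if (cache == null) {
    return new UnInvertedField(field, searcher);  // cache disabled
  }
  synchronized (cache) {
    UnInvertedField uif = cache.get(field);
    if (uif == null) {
      uif = new UnInvertedField(field, searcher);
      cache.put(field, uif);
    }
    return uif;
  }
}
{code}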

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}}, the code will deadlock because 
> {{cache.wait()}} is synchronized on the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2014-12-14 Thread Avishai Ish-Shalom (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avishai Ish-Shalom updated SOLR-6846:
-
Comment: was deleted

(was: A patch using a single synchronized block and no .wait() calls. should be 
free of deadlocks.)

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}}, the code will deadlock because 
> {{cache.wait()}} is synchronized on the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6252) A couple of small improvements to UnInvertedField class.

2014-12-14 Thread Avishai Ish-Shalom (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245878#comment-14245878
 ] 

Avishai Ish-Shalom commented on SOLR-6252:
--

done

> A couple of small improvements to UnInvertedField class.
> 
>
> Key: SOLR-6252
> URL: https://issues.apache.org/jira/browse/SOLR-6252
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: Trunk
>Reporter: Vamsee Yarlagadda
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6252-v3.patch, SOLR-6252.patch, SOLR-6252v2.patch, 
> solr-uninvertedfield-cache.patch
>
>
> Looks like UnInvertedField#getUnInvertedField implements a bit more 
> synchronization than is required, thereby increasing the complexity.
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/request/UnInvertedField.java#L667
> As pointed out in the above link, since the synchronization is performed on 
> the cache variable (which itself protects threads from concurrent access to 
> the cache), we can safely remove all the placeholder flags. As long as 
> cache.get() is in the synchronized block, we can simply populate the cache 
> with new entries and other threads will be able to see the changes.
> This change has been introduced in 
> https://issues.apache.org/jira/browse/SOLR-2548 (Multithreaded faceting)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2014-12-14 Thread Avishai Ish-Shalom (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avishai Ish-Shalom updated SOLR-6846:
-
Attachment: solr-uninvertedfield-cache.patch

A patch using a single synchronized block and no .wait() calls; it should be 
free of deadlocks.

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Attachments: solr-uninvertedfield-cache.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}} code will deadlock because 
> {{cache.wait()}} is synchronized with the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2014-12-14 Thread Avishai Ish-Shalom (JIRA)
Avishai Ish-Shalom created SOLR-6846:


 Summary: deadlock in UninvertedField#getUninvertedField()
 Key: SOLR-6846
 URL: https://issues.apache.org/jira/browse/SOLR-6846
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.2
Reporter: Avishai Ish-Shalom


Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
if a call gets to {{cache.wait()}} before another thread gets to the 
synchronized block around {{cache.notifyAll()}}, the code will deadlock because 
{{cache.wait()}} is synchronized on the same monitor object.
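
A generic illustration of the underlying hazard (not Solr's code; {{cache}} and 
{{done}} are assumed fields): a notification is lost when {{notifyAll()}} runs 
before the waiter enters {{wait()}}, which is why {{wait()}} must guard a 
condition that is re-checked in a loop:

{code}
// Waiter: re-check a shared flag in a loop while holding the monitor.
synchronized (cache) {
  while (!done) {   // without this re-checked flag, a notifyAll() that fires
    cache.wait();   // before wait() is entered is missed and the thread hangs
  }
}

// Notifier: set the flag under the same monitor, then wake the waiters.
synchronized (cache) {
  done = true;
  cache.notifyAll();
}
{code}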



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org