[jira] [Comment Edited] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101399#comment-15101399
 ] 

Uwe Schindler edited comment on LUCENE-6973 at 1/15/16 7:55 AM:


The FilterFactory for the date detection should also accept a locale:
{{this.dateFormat = datePattern != null ? new SimpleDateFormat(datePattern, 
Locale.ROOT) : null;}} is not useful for custom-formatted dates. E.g., the root 
locale has no month names in CLDR (which became the default locale data in Java 9), 
only "Month 1" :-)

So the factory should also take a locale (we have that in other places, too).

The default in the filter should rather be Locale.ENGLISH; otherwise it will 
likely break with Java 9.
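
For illustration only, a minimal sketch of the kind of change this implies -- the 
{{locale}} argument name, the helper method, and the English default are my 
assumptions here, not the actual factory code:

{code}
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.Map;

// Hypothetical factory fragment: resolve an optional "locale" argument instead of
// hard-coding Locale.ROOT, so month/day names keep working once CLDR becomes the
// default locale data (Java 9).
public class DateFormatLocaleSketch {
  static SimpleDateFormat buildDateFormat(Map<String, String> args) {
    String datePattern = args.get("datePattern");
    String localeTag = args.get("locale");
    Locale locale = localeTag != null ? Locale.forLanguageTag(localeTag) : Locale.ENGLISH;
    return datePattern != null ? new SimpleDateFormat(datePattern, locale) : null;
  }

  public static void main(String[] args) {
    SimpleDateFormat df = buildDateFormat(Map.of("datePattern", "dd MMMM yyyy", "locale", "de"));
    System.out.println(df.format(new java.util.Date(0L))); // e.g. "01 Januar 1970"
  }
}
{code}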


was (Author: thetaphi):
The FilterFactory for the date detection should also accept a locale:
{@code this.dateFormat = datePattern != null ? new 
SimpleDateFormat(datePattern, Locale.ROOT) : null;} is not useful for custom-formatted 
dates. E.g., the root locale has no month names in CLDR (which became the default 
locale data in Java 9), only "Month 1" :-)

So the factory should also take a locale (we have that in other places, too).

The default in the filter should rather be Locale.ENGLISH; otherwise it will 
likely break with Java 9.

> Improve TeeSinkTokenFilter
> --
>
> Key: LUCENE-6973
> URL: https://issues.apache.org/jira/browse/LUCENE-6973
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6973.patch, LUCENE-6973.patch, LUCENE-6973.patch, 
> LUCENE-6973.patch, LUCENE-6973.patch, LUCENE-6973.patch
>
>
> {{TeeSinkTokenFilter}} can be improved in several ways, as it's written today:
> The biggest one is removing {{SinkFilter}}, which just doesn't work and is 
> confusing. E.g., if you set a {{SinkFilter}} which filters tokens, the 
> attributes on the stream, such as {{PositionIncrementAttribute}}, are not 
> updated. Also, if you update any attribute on the stream, you affect the other 
> {{SinkStreams}} ... It's best if we remove this confusing class and let 
> consumers reuse existing {{TokenFilters}} by chaining them to the sink stream.
> After we do that, we can make all the cached states a single (immutable) 
> list which is shared between all the sink streams, so we don't need to keep 
> many references around, and also don't need to deal with {{WeakReference}}.
> Besides that, there are some other minor improvements to the code that will 
> come after we clean up this class.
> From a backwards-compatibility standpoint, I don't think that {{SinkFilter}} 
> is actually used anywhere (since it's just ... confusing and doesn't work as 
> expected), and therefore I believe removing it won't affect anyone. If, however, 
> someone did implement a {{SinkFilter}}, it should be trivial to convert it to a 
> {{TokenFilter}} and chain it to the {{SinkStream}}.
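
For illustration, a rough usage sketch of the proposed approach against the 5.x 
analysis API -- the concrete tokenizer/filter choices here are my own example, 
not code from the patch:

{code}
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.sinks.TeeSinkTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TeeSinkSketch {
  public static void main(String[] args) throws Exception {
    WhitespaceTokenizer source = new WhitespaceTokenizer();
    source.setReader(new StringReader("Tee Sink Token Filter"));

    TeeSinkTokenFilter tee = new TeeSinkTokenFilter(source);
    // No SinkFilter: just chain an ordinary TokenFilter onto the sink stream.
    TokenStream sink = new LowerCaseFilter(tee.newSinkTokenStream());

    // Consume the main stream first; its token states are cached for the sink.
    tee.reset();
    while (tee.incrementToken()) {
    }
    tee.end();

    // Replay the cached tokens through the chained filter.
    CharTermAttribute term = sink.addAttribute(CharTermAttribute.class);
    sink.reset();
    while (sink.incrementToken()) {
      System.out.println(term.toString()); // tee, sink, token, filter
    }
    sink.end();
    sink.close();
    tee.close();
  }
}
{code}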



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101399#comment-15101399
 ] 

Uwe Schindler commented on LUCENE-6973:
---

The FilterFactory for the date detection should also accept a locale:
{@code this.dateFormat = datePattern != null ? new 
SimpleDateFormat(datePattern, Locale.ROOT) : null;} is not useful for custom-formatted 
dates. E.g., the root locale has no month names in CLDR (which became the default 
locale data in Java 9), only "Month 1" :-)

So the factory should also take a locale (we have that in other places, too).

The default in the filter should rather be Locale.ENGLISH; otherwise it will 
likely break with Java 9.




[jira] [Commented] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101393#comment-15101393
 ] 

Uwe Schindler commented on LUCENE-6973:
---

Oh, I missed the last comment. Yes, I will do a final review. I agree with the 
removal. Maybe add an API hint in CHANGES.txt to direct users to replacements for Sinks.
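
For example, the hint could read something like this (wording purely illustrative):

{noformat}
* LUCENE-6973: TeeSinkTokenFilter no longer accepts a SinkFilter. To filter or
  transform a sink's tokens, chain regular TokenFilters onto the stream returned
  by newSinkTokenStream() instead.
{noformat}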




[jira] [Comment Edited] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101393#comment-15101393
 ] 

Uwe Schindler edited comment on LUCENE-6973 at 1/15/16 7:51 AM:


Oh, I missed the last comment. Yes, I will do a final review. I agree with the 
removal. Maybe add an API hint in CHANGES.txt to direct users to replacements for 
SinkFilters.


was (Author: thetaphi):
Oh, I missed the last comment. Yes, I will do a final review. I agree with the 
removal. Maybe add an API hint in CHANGES.txt to direct users to replacements for Sinks.




[jira] [Commented] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101389#comment-15101389
 ] 

Uwe Schindler commented on LUCENE-6973:
---

Yes, that's the only way to do this. I think we already have a similar one for 
some other date-related stuff.




[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2016-01-14 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101380#comment-15101380
 ] 

Varun Thacker commented on SOLR-8131:
-

Hi Mark,

Sorry I didn't quite follow your comment.

Are you saying that the Admin UI to add a core is broken? I think that didn't 
work out of the box in earlier versions as well.

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
>  Labels: difficulty-easy, impact-high
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8131-schemaless-fix.patch, 
> SOLR-8131-schemaless-fix.patch, SOLR-8131.patch, SOLR-8131.patch, 
> SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch, SOLR-8131_5x.patch
>
>
> The techproducts and other examples shipped with Solr all use 
> ClassicIndexSchemaFactory, which disables all Schema APIs that modify the 
> schema. It'd be nice to support both read and write Schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).
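
For reference, the change amounts to a solrconfig.xml switch along these lines 
(a sketch based on the stock managed-schema configuration; the mutable flag and 
resource name shown are the usual defaults, not something this issue specifies):

{code}
<!-- instead of: <schemaFactory class="ClassicIndexSchemaFactory"/> -->
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
{code}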






Re: Moving Lucene / Solr from SVN to Git

2016-01-14 Thread Dawid Weiss
Thanks Hoss. On a related note -- I assume we have a consensus about
switching to git? If anybody has problems with the workflow, I (or
others) can hopefully help. We would need to set a date for the
transition so that I can prepare the repo mirror, etc. I suggest a
week from when we clear up with Infra how to do the transition?

Dawid

On Fri, Jan 15, 2016 at 6:51 AM, Chris Hostetter
 wrote:
>
> : INFRA-11056: Migrate Lucene project from SVN to Git.
> : https://issues.apache.org/jira/browse/INFRA-11056
>
> Mark: Infra uses non-standard Jira workflows, and the last Infra
> action was to toggle the state to "Waiting for user" -- Dawid & Uwe
> posted follow-up questions for Infra in that issue, but I believe:
>
> a) it won't get on anyone's radar until it's toggled back to "WaitingforInfra"
> b) only you (as the reporter) are shown the button to do that toggling
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Updated] (LUCENE-6976) BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null

2016-01-14 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6976:
-
Attachment: LUCENE_6976.patch

Updated patch removing the toString() impl and adjusting the test to not call it.

> BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null
> 
>
> Key: LUCENE-6976
> URL: https://issues.apache.org/jira/browse/LUCENE-6976
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_6976.patch, LUCENE_6976.patch
>
>
> The BytesTermAttributeImpl class, not used much I think, has a problem in its 
> copyTo method: it assumes "bytes" isn't null, since it calls 
> BytesRef.deepCopyOf on it.  Perhaps deepCopyOf should support null?  And 
> also, toString(), equals() and hashCode() aren't implemented, but we can do so.
> This was discovered in SOLR-8541; the spatial PrefixTreeStrategy uses this 
> attribute, and CachingTokenFilter, when used on the analysis chain, will 
> call clearAttributes() in its end() method and then capture the state so it 
> can be replayed later.  BytesTermAttributeImpl.clear() nulls out the bytes 
> reference.
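
A minimal illustration of the failure mode and a null-safe guard (this is just a 
sketch, not the attached patch; the helper name is made up):

{code}
import org.apache.lucene.util.BytesRef;

public class NullSafeDeepCopyDemo {
  // BytesRef.deepCopyOf(null) throws NPE, so guard it when the attribute's bytes may be null
  static BytesRef deepCopyOrNull(BytesRef bytes) {
    return bytes == null ? null : BytesRef.deepCopyOf(bytes);
  }

  public static void main(String[] args) {
    System.out.println(deepCopyOrNull(null));                                // null, no NPE
    System.out.println(deepCopyOrNull(new BytesRef("term")).utf8ToString()); // term
  }
}
{code}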






Re: Moving Lucene / Solr from SVN to Git

2016-01-14 Thread Chris Hostetter

: INFRA-11056: Migrate Lucene project from SVN to Git.
: https://issues.apache.org/jira/browse/INFRA-11056

Mark: Infra uses non-standard Jira workflows, and the last Infra 
action was to toggle the state to "Waiting for user" -- Dawid & Uwe 
posted follow-up questions for Infra in that issue, but I believe:

a) it won't get on anyone's radar until it's toggled back to "WaitingforInfra"
b) only you (as the reporter) are shown the button to do that toggling



-Hoss
http://www.lucidworks.com/




[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 336 - Still Failing!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/336/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([CAEEF846827934FF:844D8D9593A225EF]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:857)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1413)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate

[jira] [Updated] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-6973:
---
Attachment: LUCENE-6973.patch

Patch addresses the following:

* Removes {{TokenTypeFilter}} (I found {{TypeTokenFilter}}, which does the same 
and more).

* Removes {{TokenRangeFilter}}: the old one (Sink) had a bug IMO, and in general 
I don't find this filter useful. It doesn't take into account other filters 
that drop tokens, so if you pass the range 3,5 it's not clear whether you expect 
the original terms 3-5 or any terms numbered 3-5. Anyway, I think it's trivial 
to implement if someone really needs such a filter; we don't have to offer it 
out of the box.

* Added an {{argProducer}} to {{TestRandomChains}}.

[~thetaphi] I think it's ready. I would appreciate a final review, though.




[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #1159: POMs out of sync

2016-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1159/

No tests ran.

Build Log:
[...truncated 24759 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:810: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:299: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/build.xml:411: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/common-build.xml:586:
 Error deploying artifact 'org.apache.lucene:lucene-parent:pom': Error 
installing artifact's metadata: Error while deploying metadata: Error 
transferring file

Total time: 10 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2016-01-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101185#comment-15101185
 ] 

Mikhail Khludnev commented on SOLR-5743:


Hold on..
I wonder why you can't intersect it with a parent-level filter:
{code}
q={!parent%20which=type_s:parent}COLOR_s:Blue&facet=true&child.facet.field=COLOR_s&fq=BRAND_s:Nike
{code}
In this case no copying is necessary. Make sure you have checked the examples from [the 
blog|http://blog.griddynamics.com/2013/09/solr-block-join-support.html].

> Faceting with BlockJoin support
> ---
>
> Key: SOLR-5743
> URL: https://issues.apache.org/jira/browse/SOLR-5743
> Project: Solr
>  Issue Type: New Feature
>  Components: faceting
>Reporter: abipc
>Assignee: Mikhail Khludnev
>  Labels: features
> Fix For: 5.5
>
> Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch
>
>
> For a sample inventory (note: nested documents) like this --
> * parent doc: id 10, type parent, brand Nike
> ** child doc: id 11, color Red, size XL
> ** child doc: id 12, color Blue, size XL
>
> Faceting results must contain:
> * Red (1)
> * XL (1)
> * Blue (1)
> for a "q=*" query.
> PS: The inventory example has been taken from this blog: 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html






[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 13 - Still Failing

2016-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/13/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:49638: Error CREATEing SolrCore 
'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:49638: Error CREATEing SolrCore 
'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:301)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:418)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(S

Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Mark Miller
bq. As for #2, I haven't found any tickets that mention anything like that,
that may not mean much though.

I'll see if I can dig it up. Perhaps it's only been discussed and we still
need to make one, but I'm pretty sure someone did.

- Mark

On Thu, Jan 14, 2016 at 9:01 PM Erick Erickson 
wrote:

> bq: A report of a 'spotting' or two in the wild is a very weak leg for
> such a hack to stand on.
>
> Can't disagree. The more I think about it, the harder it is to see
> some process that would
> be helpful. The fact that the node (and presumably all replicas on
> that node) are unavailable
> means you can't index to any replica on that node _and_ you can't do
> regular distributed queries. About the only thing you _can_ do is
> query the (stale) replicas on
> that node with &distrib=false, which is at least a little useful when
> trying to understand the
> state of the system but totally useless when it comes to a production
> setup.
>
> I guess "monitor and if it's repeatable try to find out why it was
> being removed in the first place".
>
> As for #2, I haven't found any tickets that mention anything like
> that, that may not mean much
> though.
>
> Scott:
>
> Right, but since the node was removed from live_nodes in the first
> place, presumably the Solr
> node wasn't reachable (speculation). So it wouldn't receive an event
> that it was removed
> from the live_node ephemeral and couldn't repair itself.
>
> On Thu, Jan 14, 2016 at 5:55 PM, Scott Blum  wrote:
> > Most ephemeral node uses include a monitoring component or watch of some
> > kind tho.
> >
> > On Thu, Jan 14, 2016 at 5:54 PM, Mark Miller 
> wrote:
> >>
> >> That is just silly though. There is no reason it should be gone in a
> legit
> >> situation. We can't have everything monitoring all its state all the
> time
> >> and trying to correct it.
> >>
> >> A report of a 'spotting' or two in the wild is a very weak leg for such
> a
> >> hack to stand on.
> >>
> >>
> >> - Mark
> >>
> >> On Thu, Jan 14, 2016 at 5:40 PM Scott Blum 
> wrote:
> >>>
> >>> For #1, I think each node should periodically ensure it's in the
> >>> live_nodes list in ZK.
> >>
> >> --
> >> - Mark
> >> about.me/markrmiller
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller
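
For context, this is roughly the kind of periodic self-check being debated 
(Scott's #1) -- purely illustrative; the path layout and wiring are assumptions, 
not Solr code:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: a node re-asserts its own ephemeral live_nodes entry.
public class LiveNodeSelfCheck implements Runnable {
  private final ZooKeeper zk;
  private final String path; // e.g. "/live_nodes/127.0.0.1:8983_solr" (made-up example)

  LiveNodeSelfCheck(ZooKeeper zk, String nodeName) {
    this.zk = zk;
    this.path = "/live_nodes/" + nodeName;
  }

  @Override
  public void run() { // schedule with a ScheduledExecutorService
    try {
      if (zk.exists(path, false) == null) {
        // The entry vanished even though this node thinks it is live:
        // re-create it (this is the "hack" the thread is arguing about).
        zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      }
    } catch (Exception e) {
      // log and retry on the next scheduled run
    }
  }
}
{code}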


[jira] [Commented] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101098#comment-15101098
 ] 

Mark Miller commented on SOLR-8546:
---

I'll see what I can find with some beasting tomorrow.

> TestLazyCores is failing a lot on the Jenkins cluster.
> --
>
> Key: SOLR-8546
> URL: https://issues.apache.org/jira/browse/SOLR-8546
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erick Erickson
>
> Looks like two issues:
> * A thread leak due to searcherExecutor
> * An ObjectTracker fail because a SolrCore is left unclosed.






[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Greg Wilkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101076#comment-15101076
 ] 

Greg Wilkins commented on SOLR-8539:


So I'm going to close the [jetty 
issue](https://bugs.eclipse.org/bugs/show_bug.cgi?id=485794) for this as won't fix.

I think our current behaviour of bravely and foolishly trying to limp on is 
probably the correct one, given that OOME handling happens on throw, not on 
catch. Anyway, thanks for bringing this to our attention -- it was educational.

Do open more jetty issues if you have any other concerns/ideas about our error 
handling or anything else.

cheers


> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with it. 
> However, I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on Solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.getFrame(SegmentTermsEnum.java:215)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum

[jira] [Commented] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.

2016-01-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101064#comment-15101064
 ] 

Erick Erickson commented on SOLR-8546:
--

1,500 runs with the beasting script and no failures. I'm reluctant to make 
changes here that I can't show, you know, actually make a difference, but I 
don't see much choice other than to take stabs at it based on the Jenkins 
logs, check code in, and then wait and see whether the failures go away.




[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101050#comment-15101050
 ] 

Mark Miller commented on SOLR-8539:
---

Also, while I don't expect it's involved in these low-memory simulation cases, 
there seems to be a general problem with OnOutOfMemoryError and larger heaps 
(which can prevent it from working): 
https://bugs.openjdk.java.net/browse/JDK-8027434


[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101048#comment-15101048
 ] 

Mark Miller commented on SOLR-8539:
---

bq. I even duplicated the report here and I see it working.

I should mention the difference: I did not use the script to start Solr. 
I started it myself with -XX:OnOutOfMemoryError='echo OOM %p'.

Results in:

{noformat}
 [java] 520727 INFO  (qtp1013423070-32) [   ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system params={wt=json&_=1452822748177} status=0 
QTime=7
 [java] #
 [java] # java.lang.OutOfMemoryError: Java heap space
 [java] # -XX:OnOutOfMemoryError="echo OOM %p"
 [java] #   Executing /bin/sh -c "echo OOM 19890"...
 [java] OOM 19890
 [java] 522083 ERROR (qtp1013423070-30) [   x:my_core] o.a.s.s.HttpSolrCall 
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
 [java] at 
org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:605)
 [java] at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:474)
 [java] at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
 [java] at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
 [java] at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
{noformat}
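
A tiny heap-exhaustion test along these lines can reproduce the behaviour described 
above (the class name and heap size are arbitrary illustrations; start it with 
something like {{java -Xmx64m "-XX:OnOutOfMemoryError=echo OOM %p" OomHookDemo}}):

{code}
import java.util.ArrayList;
import java.util.List;

public class OomHookDemo {
  public static void main(String[] args) {
    List<long[]> hog = new ArrayList<>();
    try {
      while (true) {
        hog.add(new long[1_000_000]); // ~8 MB per iteration
      }
    } catch (OutOfMemoryError e) {
      // Catching the error does not suppress -XX:OnOutOfMemoryError;
      // the JVM runs the hook when the error is first thrown.
      System.err.println("caught OOME; the hook should already have fired");
    }
  }
}
{code}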


Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Erick Erickson
bq: A report of a 'spotting' or two in the wild is a very weak leg for
such a hack to stand on.

Can't disagree. The more I think about it, the harder it is to see
some process that would
be helpful. The fact that the node (and presumably all replicas on
that node) are unavailable
means you can't index to any replica on that node _and_ you can't do
regular distributed queries. About the only thing you _can_ do is
query the (stale) replicas on
that node with &distrib=false, which is at least a little useful when
trying to understand the
state of the system but totally useless when it comes to a production setup.

I guess "monitor and if it's repeatable try to find out why it was
being removed in the first place".

As for #2, I haven't found any tickets that mention anything like
that, that may not mean much
though.

Scott:

Right, but since the node was removed from live_nodes in the first
place, presumably the Solr
node wasn't reachable (speculation). So it wouldn't receive an event
that it was removed
from the live_node ephemeral and couldn't repair itself.

On Thu, Jan 14, 2016 at 5:55 PM, Scott Blum  wrote:
> Most ephemeral node uses include a monitoring component or watch of some
> kind tho.
>
> On Thu, Jan 14, 2016 at 5:54 PM, Mark Miller  wrote:
>>
>> That is just silly though. There is no reason it should be gone in a legit
> >> situation. We can't have everything monitoring all its state all the time
>> and trying to correct it.
>>
>> A report of a 'spotting' or two in the wild is a very weak leg for such a
>> hack to stand on.
>>
>>
>> - Mark
>>
>> On Thu, Jan 14, 2016 at 5:40 PM Scott Blum  wrote:
>>>
>>> For #1, I think each node should periodically ensure it's in the
>>> live_nodes list in ZK.
>>
>> --
>> - Mark
>> about.me/markrmiller
>
>




[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101044#comment-15101044
 ] 

Mark Miller commented on SOLR-8539:
---

bq.  The documentation for the mechanism says: "Run user-defined commands when 
an OutOfMemoryError is first thrown.", which would suggest how the exception is 
handled is not important? But perhaps that documentation is wrong?

Previous JIRAs that have been filed led me to believe that is not what 
happens, but a little testing by hand confirms it. When this does not work 
as expected, it must be due to another piece.


[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101041#comment-15101041
 ] 

Mark Miller commented on SOLR-8539:
---

bq. Namely that it will "Run user-defined commands when an OutOfMemoryError is 
first thrown. (Introduced in 1.4.2 update 12, 6)"

I was doing the same and found the same thing. It doesn't matter if you catch 
it or not. I even duplicated the report here and I see it working. So it must 
be the command-line property or the script it triggers that is off.

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.getFrame(SegmentTermsEnum.java:215)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.pushFrame(SegmentTermsEnum.java:241)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Segm

Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Scott Blum
Most ephemeral node uses include a monitoring component or a watch of some
kind, though.
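
At its simplest, the periodic self-check could look something like the sketch
below (the znode path, interval, and error handling are illustrative guesses,
not Solr's actual registration code):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: periodically verify this node's ephemeral live_nodes
// entry still exists, and re-create it if it has vanished.
public class LiveNodeChecker {

  private final ZooKeeper zk;
  private final String livePath; // e.g. "/live_nodes/host:8983_solr" (illustrative)

  public LiveNodeChecker(ZooKeeper zk, String nodeName) {
    this.zk = zk;
    this.livePath = "/live_nodes/" + nodeName;
  }

  public void start() {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(this::ensureRegistered, 30, 30, TimeUnit.SECONDS);
  }

  private void ensureRegistered() {
    try {
      if (zk.exists(livePath, false) == null) {
        // The ephemeral node is gone even though our session is alive: re-register.
        zk.create(livePath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      }
    } catch (KeeperException.NodeExistsException e) {
      // Someone else (e.g. a watch) re-created it first; nothing to do.
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
    } catch (KeeperException e) {
      // A real node would log this and retry on the next tick.
    }
  }
}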

On Thu, Jan 14, 2016 at 5:54 PM, Mark Miller  wrote:

> That is just silly though. There is no reason it should be gone in a legit
> situation. We can't have everything monitoring all of its state all the time
> and trying to correct it.
>
> A report of a 'spotting' or two in the wild is a very weak leg for such a
> hack to stand on.
>
>
> - Mark
>
> On Thu, Jan 14, 2016 at 5:40 PM Scott Blum  wrote:
>
>> For #1, I think each node should periodically ensure it's in the
>> live_nodes list in ZK.
>>
> --
> - Mark
> about.me/markrmiller
>


[jira] [Commented] (SOLR-5806) SolrCloud: UI link or script to remove node from clusterstate.json

2016-01-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101040#comment-15101040
 ] 

Erick Erickson commented on SOLR-5806:
--

Doesn't DELETEREPLICA do this already? Maybe it didn't exist yet when this JIRA 
was posted...

> SolrCloud: UI link or script to remove node from clusterstate.json
> --
>
> Key: SOLR-5806
> URL: https://issues.apache.org/jira/browse/SOLR-5806
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.7
>Reporter: Gregg Donovan
>Priority: Minor
>
> In cases of partial failure where a node is still connected to ZooKeeper but 
> failing -- e.g. bad disk, bad memory, etc. -- it would be nice to have a 
> quick UI link or command-line script to remove the node from 
> clusterstate.json quickly.
> We've had partial failures where we couldn't SSH into the box but the VM was 
> still running and connected to ZooKeeper. In these cases, we've had to power 
> the machine down from the 
> [ILO|http://en.wikipedia.org/wiki/HP_Integrated_Lights-Out] in order to get 
> it out of clusterstate.json. 
> Having something handier in such outages would be great. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101033#comment-15101033
 ] 

Mark Miller commented on SOLR-8131:
---

I think this made it so that you cannot use the Admin UI to add a SolrCore out 
of the box? I think that's a tough user experience.

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
>  Labels: difficulty-easy, impact-high
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8131-schemaless-fix.patch, 
> SOLR-8131-schemaless-fix.patch, SOLR-8131.patch, SOLR-8131.patch, 
> SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch, SOLR-8131_5x.patch
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8507) Add information about database product name, product version, driver name, and driver version.

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099291#comment-15099291
 ] 

Joel Bernstein edited comment on SOLR-8507 at 1/15/16 1:37 AM:
---

Let's just get the version from one node to keep it simple. Then we just need a 
few tests.
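
For reference, the four calls DBVisualizer makes only need simple string
answers; a minimal sketch (class name and returned values are placeholders, not
the actual patch) could look like:

{code:java}
// Hypothetical sketch of the four metadata getters DBVisualizer calls.
// This is not the real org.apache.solr.client.solrj.io.sql.DatabaseMetaDataImpl.
public class DriverInfoSketch {

  public String getDatabaseProductName() {
    return "Apache Solr";
  }

  public String getDatabaseProductVersion() {
    // A real implementation could fetch this from a single node's system info.
    return "unknown";
  }

  public String getDriverName() {
    return "org.apache.solr.client.solrj.io.sql.DriverImpl";
  }

  public String getDriverVersion() {
    return "1.0";
  }

  public static void main(String[] args) {
    DriverInfoSketch info = new DriverInfoSketch();
    System.out.println(info.getDatabaseProductName() + " " + info.getDatabaseProductVersion());
    System.out.println(info.getDriverName() + " " + info.getDriverVersion());
  }
}
{code}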


was (Author: joel.bernstein):
Let's just get the version from one node to keep it simple.

> Add information about database product name, product version, driver name, 
> and driver version.
> --
>
> Key: SOLR-8507
> URL: https://issues.apache.org/jira/browse/SOLR-8507
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8507.patch
>
>
> DBVisualizer asks for information about database product name, product 
> version, driver name, and driver version. These should be implemented in 
> DatabaseMetaDataImpl.
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDatabaseProductName()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDatabaseProductVersion()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDriverName()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDriverVersion()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8507) Add information about database product name, product version, driver name, and driver version.

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099291#comment-15099291
 ] 

Joel Bernstein commented on SOLR-8507:
--

Let's just get the version from one node to keep it simple.

> Add information about database product name, product version, driver name, 
> and driver version.
> --
>
> Key: SOLR-8507
> URL: https://issues.apache.org/jira/browse/SOLR-8507
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8507.patch
>
>
> DBVisualizer asks for information about database product name, product 
> version, driver name, and driver version. These should be implemented in 
> DatabaseMetaDataImpl.
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDatabaseProductName()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDatabaseProductVersion()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDriverName()
> 2016-01-07 13:30:10.814 FINE83 [pool-3-thread-10 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getDriverVersion()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-8503) Implement org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData and getCatalog

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8503.

Resolution: Fixed

> Implement org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData and 
> getCatalog
> ---
>
> Key: SOLR-8503
> URL: https://issues.apache.org/jira/browse/SOLR-8503
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8503.patch, SOLR-8503.patch
>
>
> Product: DbVisualizer Free 9.2.14 [Build #2495]
> OS: Mac OS X
> OS Version: 10.11.2
> OS Arch: x86_64
> Java Version: 1.8.0_60
> Java VM: Java HotSpot(TM) 64-Bit Server VM
> Java Vendor: Oracle Corporation
> Java Home: /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home
> DbVis Home: /Applications/DbVisualizer.app/Contents/java/app
> User Home: /Users/risdenk
> PrefsDir: /Users/risdenk/.dbvis
> SessionId: 83
> BindDir: null
> An error occurred while establishing the connection:
> Details:
>    Type: java.lang.UnsupportedOperationException
> Stack Trace:
> java.lang.UnsupportedOperationException
>    at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData(ConnectionImpl.java:116)
>    at com.onseven.dbvis.h.B.C.Ć(Z:2253)
>    at com.onseven.dbvis.h.B.C.ā(Z:253)
>    at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8503) Implement org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData and getCatalog

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099263#comment-15099263
 ] 

ASF subversion and git services commented on SOLR-8503:
---

Commit 1724721 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1724721 ]

SOLR-8503: Implement 
org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData and getCatalog

> Implement org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData and 
> getCatalog
> ---
>
> Key: SOLR-8503
> URL: https://issues.apache.org/jira/browse/SOLR-8503
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8503.patch, SOLR-8503.patch
>
>
> Product: DbVisualizer Free 9.2.14 [Build #2495]
> OS: Mac OS X
> OS Version: 10.11.2
> OS Arch: x86_64
> Java Version: 1.8.0_60
> Java VM: Java HotSpot(TM) 64-Bit Server VM
> Java Vendor: Oracle Corporation
> Java Home: /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home
> DbVis Home: /Applications/DbVisualizer.app/Contents/java/app
> User Home: /Users/risdenk
> PrefsDir: /Users/risdenk/.dbvis
> SessionId: 83
> BindDir: null
> An error occurred while establishing the connection:
> Details:
>    Type: java.lang.UnsupportedOperationException
> Stack Trace:
> java.lang.UnsupportedOperationException
>    at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.getMetaData(ConnectionImpl.java:116)
>    at com.onseven.dbvis.h.B.C.Ć(Z:2253)
>    at com.onseven.dbvis.h.B.C.ā(Z:253)
>    at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099255#comment-15099255
 ] 

Shawn Heisey commented on SOLR-8539:


Would 9.2.13 handle that the same?  The stable branch of Solr is using that 
version of Jetty, and requires Java 7.  Our trunk branch requires Java 8.  We 
did have that branch upgraded to Jetty 9.3, but had some problems unrelated to 
this issue, so that upgrade has recently been reverted.

I wonder what might be happening with Solr that this would behave differently.  
I wonder if our script is not working right.  When I find some time I can do 
some experiments on a 5.3.2 snapshot.

Does the presence of a Filter make any difference?  Pretty much everything in 
Solr is handled through SolrDispatchFilter, which ultimately implements 
javax.servlet.Filter.

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTerm

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Joakim Erdfelt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099214#comment-15099214
 ] 

Joakim Erdfelt commented on SOLR-8539:
--

We've been testing the OOME handling on the Jetty side to see where we would 
need to make improvements.

We've discovered that the {{-XX:OnOutOfMemoryError}} works as documented in the 
JVM.

Namely that it will ??"Run user-defined commands when an OutOfMemoryError is 
first thrown. (Introduced in 1.4.2 update 12, 6)"??

We tested this in two different ways, and put the examples up on github at
https://github.com/jetty-project/jetty-oome 

h4. Technique #1: in a distribution

The project is also a valid {{jetty.base}} directory and can be utilized as one 
directly.

{code:none}
$ mvn clean install
$ java -Xmx64m -XX:OnOutOfMemoryError="kill -9 %p" -jar 
~/code/jetty/distros/jetty-distribution-9.3.6.v20151106/start.jar 
{code}

(In a different terminal, issue the http request to trigger the OOME)

{code:none}
$ curl http://localhost:8080/oome/
{code}

The output is as follows ...

{code:none}
$ java -Xmx64m -XX:OnOutOfMemoryError="kill -9 %p" -jar 
~/code/jetty/distros/jetty-distribution-9.3.6.v20151106/start.jar 
2016-01-14 17:31:33.253:INFO::main: Logging initialized @291ms
2016-01-14 17:31:33.352:WARN:oejx.XmlConfiguration:main: Property 'jetty.port' 
is deprecated, use 'jetty.http.port' instead
2016-01-14 17:31:33.393:INFO:oejs.Server:main: jetty-9.3.6.v20151106
2016-01-14 17:31:33.406:INFO:oejdp.ScanningAppProvider:main: Deployment monitor 
[file:///home/joakim/code/jetty/github-jetty-project/jetty-oome/webapps/] at 
interval 1
2016-01-14 17:31:33.520:INFO:oejw.StandardDescriptorProcessor:main: NO JSP 
Support for /oome, did not find org.eclipse.jetty.jsp.JettyJspServlet
2016-01-14 17:31:33.548:INFO:oejsh.ContextHandler:main: Started 
o.e.j.w.WebAppContext@7225790e{/oome,file:///tmp/jetty-0.0.0.0-8080-oome.war-_oome-any-5546839308576240840.dir/webapp/,AVAILABLE}{/oome.war}
2016-01-14 17:31:33.578:INFO:oejw.StandardDescriptorProcessor:main: NO JSP 
Support for /wsecho, did not find org.eclipse.jetty.jsp.JettyJspServlet
2016-01-14 17:31:33.580:INFO:oejsh.ContextHandler:main: Started 
o.e.j.w.WebAppContext@a7e666{/wsecho,file:///tmp/jetty-0.0.0.0-8080-wsecho.war-_wsecho-any-6960823368392015323.dir/webapp/,AVAILABLE}{/wsecho.war}
2016-01-14 17:31:33.593:INFO:oejs.ServerConnector:main: Started 
ServerConnector@4d95d2a2{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2016-01-14 17:31:33.594:INFO:oejs.Server:main: Started @633ms
xzzz#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 13803"...
Killed
{code}

h4. Technique #2: directly attempting to prevent {{-XX:OnOutOfMemoryError}} 
from working

There's a simple class in 
[https://github.com/jetty-project/jetty-oome/blob/master/src/main/java/test/Ohme.java]
 that attempts to replicate the threading model in Jetty at its most distilled 
and to prevent the OOME from triggering the {{-XX:OnOutOfMemoryError}} script.

{code:none}
$ mvn clean install
$ java -Xmx64m -XX:OnOutOfMemoryError="kill -9 %p" -cp target/classes test.Ohme
xzzz#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 14382"...
Killed
{code}

Some notes on the output seen:

* The main thread outputs a {{"."}} every 500ms.
* The executor thread outputs an {{"x"}} when it enters the runnable, a {{"z"}} 
every time it loops through and consumes more memory, and finally a {{"!"}} if 
the Error is ever caught.

h4. Observed results

In neither scenario have we seen the {{-XX:OnOutOfMemoryError}} command fail to 
execute; in fact, we can't even demonstrate a way *to* prevent it.

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.Queue

[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Description: 
Currently all streams in the Streaming API are synchronously *pulled* by a 
client.

It would be great to add the capability to have asynchronous streams that live 
within Solr that can *push* content as well. This would facilitate large-scale 
alerting and background aggregation use cases.

  was:
Currently all streams in the Streaming API are synchronously *pulled* by a 
client.

It would be great to add the capability to have Asyncronous streams that live 
within Solr that can *push* content as well. This would facilite large scale 
alerting and background aggregation use cases.


> Add asynchronous streams to the Streaming API
> -
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* by a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr that can *push* content as well. This would facilitate large-scale 
> alerting and background aggregation use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8550) Add asynchronous streams to the Streaming API

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098459#comment-15098459
 ] 

Joel Bernstein edited comment on SOLR-8550 at 1/15/16 12:17 AM:


The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes its underlying stream. Syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.

An example of background aggregation:

{code}
parallel(async(update(rollup(...
{code}
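
As a very rough illustration of the "open it and keep it around" part, a
wrapper like the sketch below could periodically re-open, drain, and close its
inner stream from a background thread. The {{PushableStream}} interface and the
scheduling details are invented for the sketch and are not the eventual
AsyncStream API:

{code:java}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal stand-in for a stream that can be opened, read to completion, and closed.
// This interface is hypothetical; the real Streaming API uses TupleStream.
interface PushableStream {
  void open() throws IOException;
  boolean readOne() throws IOException; // returns false when the stream is exhausted
  void close() throws IOException;
}

// Sketch of an async wrapper: the /stream handler would hold on to an instance
// like this, and the background thread drives the inner stream on a schedule.
class AsyncStreamSketch {
  private final PushableStream inner;
  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

  AsyncStreamSketch(PushableStream inner) {
    this.inner = inner;
  }

  void start(long periodSeconds) {
    scheduler.scheduleWithFixedDelay(this::runOnce, 0, periodSeconds, TimeUnit.SECONDS);
  }

  private void runOnce() {
    try {
      inner.open();
      while (inner.readOne()) {
        // each tuple read here is "pushed" by the inner stream (e.g. an alert or update)
      }
    } catch (IOException e) {
      // a real implementation would log and decide whether to keep the schedule alive
    } finally {
      try {
        inner.close();
      } catch (IOException ignored) {
      }
    }
  }

  void stop() {
    scheduler.shutdownNow();
  }
}
{code}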



was (Author: joel.bernstein):
The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in a memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes it's underlying stream. Syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.



> Add asynchronous streams to the Streaming API
> -
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* by a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr that can *push* content as well. This would facilitate large-scale 
> alerting and background aggregation use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 5.4.1 RC1

2016-01-14 Thread Michael McCandless
+1

SUCCESS! [0:31:52.038641]

Mike McCandless

http://blog.mikemccandless.com

On Thu, Jan 14, 2016 at 5:41 AM, Adrien Grand  wrote:
> Please vote for the RC1 release candidate for Lucene/Solr 5.4.1
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.1-RC1-rev1724447/
>
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.1-RC1-rev1724447/
>
> The smoke tester already passed for me both with the local and remote
> artifacts, so here is my +1.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Description: 
Currently all streams in the Streaming API are synchronously *pulled* by a 
client.

It would be great to add the capability to have asynchronous streams that live 
within Solr that can *push* content as well. This would facilitate large-scale 
alerting and background aggregation use cases.

  was:
Currently all streams in the Streaming API are synchronously *pulled* by a 
client.

It would be great to add the capability to have Asyncronous streams that live 
within Solr that can *push* content as well. This would facilite very large 
scale alerting.


> Add asynchronous streams to the Streaming API
> -
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* by a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr that can *push* content as well. This would facilitate large-scale 
> alerting and background aggregation use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Summary: Add asynchronous streams to the Streaming API  (was: Add 
asynchronous streams to the Streaming API to facilitate alerting)

> Add asynchronous streams to the Streaming API
> -
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* by a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr that can *push* content as well. This would facilitate very 
> large-scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Greg Wilkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099172#comment-15099172
 ] 

Greg Wilkins commented on SOLR-8539:


Mark,

the suggestion made over in the jetty discussion for this is that if we see an 
OOME in a few key places, then we could route that to a Server.stop(Throwable), 
which would stop the server and then throw the OOME from the Server.join() 
method, which the main thread should be blocked on.

... or are you saying that it is sufficient to allow the OOME to propagate to 
any thread??   The documentation for the mechanism says: "Run user-defined 
commands when an OutOfMemoryError is first thrown.", which would suggest that 
how the exception is handled is not important?  But perhaps that documentation 
is wrong?
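
To make the first suggestion concrete, a hedged sketch of the embedding side is
below. It uses only the existing {{Server.start()}}/{{join()}}/{{stop()}}
methods; the {{Server.stop(Throwable)}} overload is only a proposal, so the
sketch carries the error in a separate holder:

{code:java}
import java.util.concurrent.atomic.AtomicReference;
import org.eclipse.jetty.server.Server;

// Sketch of the "stop the server, then rethrow from the main thread" idea.
// The fatalError holder stands in for the proposed Server.stop(Throwable);
// worker code that detects an OutOfMemoryError would call reportFatal(...).
public class StopOnFatalErrorSketch {

  private static final AtomicReference<Throwable> fatalError = new AtomicReference<>();
  private static volatile Server server;

  public static void reportFatal(Throwable t) {
    fatalError.compareAndSet(null, t);
    try {
      server.stop(); // unblocks the main thread waiting in join()
    } catch (Exception ignored) {
    }
  }

  public static void main(String[] args) throws Exception {
    server = new Server(8080); // illustrative port
    server.start();
    server.join(); // main thread blocks here until the server is stopped

    Throwable t = fatalError.get();
    if (t != null) {
      // Re-throw on the main thread so the JVM (or a wrapper script) sees it.
      throw new RuntimeException("Server stopped due to fatal error", t);
    }
  }
}
{code}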

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Greg Wilkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099168#comment-15099168
 ] 

Greg Wilkins commented on SOLR-8539:



Shawn,

from the jetty point of view - the server is entirely designed with the 
intention of being embedded.  Jetty is primarily a software component before it 
is a software container... it just so happens that we ship a distribution with 
our components assembled as a standard servlet container.  You'll have noted 
that our XML configuration is just calling the Java API, so it is easy to 
convert to embedded use.

Embedding jetty is a very easy and sensible path to go down.

With regard to the servlet container, the vast majority of that complexity is 
optional, and if you are configuring your own server you don't need to 
instantiate it.  You can easily get rid of servlet classloaders, security, 
sessions, even servlets themselves if you want to write to the jetty APIs.
So if you program to Jetty at the Server+Handler level, you get pretty much the 
same level of abstraction as offered by Netty - but with the servlet request 
API for familiarity.   We provide the same kind of async capability - but 
perhaps the servlet API is a little more clunky (though it also has its good 
points).
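
At its most distilled, that level looks something like the sketch below (port
and response body are just placeholders):

{code:java}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

// Sketch of the Server+Handler level: no webapps, no servlets,
// just a Handler wired straight onto the Server.
public class HandlerLevelSketch {

  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    server.setHandler(new AbstractHandler() {
      @Override
      public void handle(String target, Request baseRequest, HttpServletRequest request,
                         HttpServletResponse response) throws IOException, ServletException {
        response.setContentType("text/plain;charset=utf-8");
        response.getWriter().println("handled at the Handler level");
        baseRequest.setHandled(true); // tell Jetty the request is done
      }
    });
    server.start();
    server.join();
  }
}
{code}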

In short - if servlets & webapps are seen as a burden, then don't use them.  We 
are happy to give advice as to how to not use the full servlet container.

We are also keen to participate in discussions like this on OOME handling, to 
improve our embeddability and adaptability for more use cases.





> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.se

[jira] [Comment Edited] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099157#comment-15099157
 ] 

Mark Miller edited comment on SOLR-8539 at 1/14/16 11:55 PM:
-

bq. Many libraries now spawn their own threads and thread pools.

Yes, but most libraries will *not* eat OOMExceptions, and it will be addressed 
if they do.

bq. As for exiting the VM on all VirtualMachineErrors, that seems improper too.

That is not what we are looking for. The JVM provides a specific feature to 
exit on OOMException, not any error. We are seeking the same behavior.

Trying to limp on after an OOM exception can cause nasty results in a cluster 
env.


was (Author: markrmil...@gmail.com):
bq. Many libraries now spawn their own threads and thread pools.

Yes, but most libraries will need eat OOMExceptions, and will address if they 
are.

bq. As for exiting the VM on all VirtualMachineErrors, that seems improper too.

That is not what we are looking for. The JVM provides a specific feature to 
exit on OOMException, not any error. We are seeking the same behavior.

Trying to limp on after an OOM exception can cause nasty results in a cluster 
env.

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThrea

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099157#comment-15099157
 ] 

Mark Miller commented on SOLR-8539:
---

bq. Many libraries now spawn their own threads and thread pools.

Yes, but most libraries will *not* eat OOMExceptions, and it will be addressed 
if they do.

bq. As for exiting the VM on all VirtualMachineErrors, that seems improper too.

That is not what we are looking for. The JVM provides a specific feature to 
exit on OOMException, not any error. We are seeking the same behavior.

Trying to limp on after an OOM exception can cause nasty results in a cluster 
env.

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.getFrame(SegmentTermsEnum.java:215)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.pushFrame(Se

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099139#comment-15099139
 ] 

Shawn Heisey commented on SOLR-8539:


This whole comment is a tangent, so skip or read accordingly.

One large goal we've got for Solr is to make it a standalone program.  One way 
to do that would be to embed Jetty and handle its configuration completely in 
our own code, from one of our own config files.  This is likely the easiest 
path, because it would involve very little change to existing code.  The 
significant changes would likely be new classes.
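
Very roughly, the embedded direction might look like the sketch below; the 
port, context path, and servlet are placeholders rather than Solr's actual 
wiring:

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

// Sketch of the "embed Jetty and configure it from our own code" direction.
// The servlet, context path, and port are placeholders, not Solr's real wiring.
public class EmbeddedJettySketch {

  public static class PingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
      resp.setContentType("text/plain;charset=utf-8");
      resp.getWriter().println("OK");
    }
  }

  public static void main(String[] args) throws Exception {
    Server server = new Server(8983); // the port would come from Solr's own config file
    ServletContextHandler context = new ServletContextHandler();
    context.setContextPath("/solr");
    context.addServlet(new ServletHolder(new PingServlet()), "/ping");
    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}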

Another option, which would probably involve significant rewrites of some 
classes, is to switch to a lower-level framework like Netty.  Solr doesn't 
really need a lot of the functionality that a full servlet container provides, 
and Netty's claims look inviting.

Going far into left field, another consideration we have is HTTP/2 support.  
Jetty has HTTP/2 already with 9.3, which requires Java 8.  In our stable 
versions we are on Java 7 and Jetty 9.2, but 6.0 will require Java 8.  
HttpClient (the library used by SolrJ) does not support HTTP/2, and support is 
a long way off.  JettyClient has been mentioned as a possible replacement on 
SOLR-7442.  Netty also has HTTP/2 support in the client and the server, but I 
wouldn't want to actually switch to Netty unless there are demonstrable 
benefits in features (besides HTTP/2), performance, and/or ease of development.


> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnect

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Joakim Erdfelt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099114#comment-15099114
 ] 

Joakim Erdfelt commented on SOLR-8539:
--

If the Throwable occurs outside of a Jetty thread, we cannot propagate it down 
to the JVM for you.

Many libraries now spawn their own threads and thread pools.
There are also legitimate libraries that use OOME themselves to downgrade 
performance when under too much load (the most common are image and video 
processing libraries).

As for exiting the VM on _all_ VirtualMachineErrors, that seems improper too.

java.lang.VirtualMachineError
  - java.lang.InternalError 
  - java.lang.UnknownError
  - java.lang.StackOverflowError  <-- this one is very easy to recover from; we 
shouldn't exit the JVM because of it.
  - java.lang.OutOfMemoryError
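
To illustrate that distinction, a hedged sketch (not Jetty's or Solr's actual 
handling):

{code}
// Illustrative only: rethrow the truly fatal VirtualMachineErrors so they reach the
// thread/JVM, but treat StackOverflowError as something normal error handling can absorb.
public class ErrorPolicySketch {
  static void onServerError(Throwable t) {
    if (t instanceof VirtualMachineError && !(t instanceof StackOverflowError)) {
      // OutOfMemoryError, InternalError, UnknownError: propagate
      throw (VirtualMachineError) t;
    }
    // StackOverflowError (or any non-VM error): fall through to normal handling,
    // e.g. log it and return a 500 response
  }
}
{code}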

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.<init>(SegmentTermsEnumFrame.java:100)

Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Mark Miller
That is just silly though. There is no reason it should be gone in a legit
situation. We can't have everything monitoring all its state all the time
and trying to correct it.

A report of a 'spotting' or two in the wild is a very weak leg for such a
hack to stand on.


- Mark

On Thu, Jan 14, 2016 at 5:40 PM Scott Blum  wrote:

> For #1, I think each node should periodically ensure it's in the
> live_nodes list in ZK.
>
-- 
- Mark
about.me/markrmiller


[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Greg Wilkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099075#comment-15099075
 ] 

Greg Wilkins commented on SOLR-8539:


Unfortunately the spec requires us to handle Errors - and there are also good 
lifecycle reasons for doing so.

While I think OOME and the like are more often than not fatal, there is still 
some value in attempting to return a 500, and reasonable prospects of succeeding 
(if the attempted allocation was large, many smaller ones might still succeed, and 
Jetty does not need much memory). In fact SOLR-8453 is about how the server does 
send an error response when confronted with an application exception, so it is 
hard to say generally that we should not attempt normal error handling (which 
may include logging, notification, System.exit, etc.).

Thus, as a container, I think we are going to have to attempt to handle 
VirtualMachineErrors, at least in as much as we call whatever pluggable APIs we 
are given (onError, error page dispatch) to notify of the exception. If we suffer 
another error while attempting to do that, it can propagate back to the 
thread/JVM.

Note also that Netty has a slightly easier job in this regard, as it does 
not have to deal with the complexities of the servlet API - both synchronous 
and asynchronous.
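
To make that containment strategy concrete, a hedged sketch (illustrative only; 
the two hooks are hypothetical stand-ins for onError and error page dispatch, not 
Jetty API):

{code}
// Illustrative only: notify via the pluggable hooks and deliberately catch nothing,
// so that a secondary error thrown while handling the first one propagates back to
// the thread/JVM, as described above.
public abstract class ErrorNotificationSketch {
  protected abstract void fireOnError(Throwable failure);        // hypothetical onError hook
  protected abstract void dispatchErrorPage(Throwable failure);  // hypothetical error page dispatch

  public void notifyFailure(Throwable failure) {
    fireOnError(failure);
    dispatchErrorPage(failure);
  }
}
{code}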

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool

Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Scott Blum
For #1, I think each node should periodically ensure it's in the live_nodes
list in ZK.


Re: live_nodes and state.json can get out of sync

2016-01-14 Thread Mark Miller
Both sound like kind of hacky workarounds to bugs to me.

For #1, we should not add APIs to fix live nodes. I'd try to reproduce -
manually removing the live node is no real help. We need this to work
though, not an API to try and patch a possible bug.

For #2, I think there is an open issue about taking a replica offline for
various reasons. Perhaps that could be used. That should probably disable
any new recovery attempts as well as make the replica inactive in the
clusterstate until the replica is put back online.

- Mark

On Thu, Jan 14, 2016 at 4:23 PM Erick Erickson 
wrote:

> We've seen at least two cases "in the wild" where a Solr node is in
> fine shape, but live_nodes does NOT list a Solr node and the
> corresponding state.json for that node shows it as "active".
>
> Furthermore, sending queries directly to the core on the machine in
> question with distrib=false generates a correct response so Solr is
> indeed "live". AFAIK, there's no way to get that Solr node _back_ into
> live_nodes without bouncing the server, but that can be disruptive.
>
> I've reproduced this situation locally and can confirm that the
> live_nodes entry never comes back. To reproduce it though, I had to:
> 1> create a collection
> 2> nuke the live_nodes ZNode
>
> Unfortunately, we don't know how to reproduce the original _real_
> condition that caused this in the first place.
>
> Other than manually editing the znode, is there any other way to
> reinsert the node in live_nodes? If not, what do people think about a
> Collections API that did this? I'm thinking of a command that would
> fail unless it was sent to the node that was re-inserting itself. That
> way if the node was truly down it couldn't get re-inserted
> inappropriately.
>
> Or Solr nodes could periodically query Zookeeper to see if they were
> appropriately in live_nodes, but that seems like a lot of work for
> something that's apparently _very_ rare. I'm also not sure the node in
> question is receiving events from ZK, so I don't think even watching its
> own node is a foolproof way for a Solr node that has been taken out of
> live_nodes inappropriately to re-insert itself.
>
>
> ***
> Second issue. In extremely heavy indexing situations, replicas will
> never catch up to a leader if for some reason they go into recovery.
> Of course if all the replicas for a shard go down, everything grinds
> to a halt.
>
> What do people think about an option to essentially toggle whether
> recoveries are even attempted? Yet another Collections API perhaps,
> DISABLERECOVERY=true|false. The case in point is a situation that
> indexes over 1M docs/second. Or maybe this is a property on the
> collection in ZK that you could change with MODIFYCOLLECTION and
> specify on CREATE. Actually, I like this latter a lot better than
> proliferating another API action.
>
> Yes, that puts data integrity at risk since eventually you get to a
> leader-only shard. But that's already at risk since the replicas
> demonstrably never catch up.
>
> Of course the default state would be to always do the recovery as we
> do now. For installations that saw this periodically happen, they
> could change the option during an indexing lull, allow recovery then
> change the property back.
>
> Not entirely sure what I think of the idea at all, but again this is
> something we're seeing in the wild.
>
> I'll raise JIRAs unless the ideas get shot down.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller


[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099005#comment-15099005
 ] 

Mark Miller commented on SOLR-8539:
---

bq. I wonder how Netty fares on this particular problem.

No clue, and like Greg says, it's hard to promise in any code base, but I've 
seen the OOM killer script actually get invoked on Tomcat, so it doesn't appear 
to be consistently swallowing OOMs.

bq.  you probably need to look at adding your own ErrorPageErrorHandler

Thanks for the tip. Def worth exploring.
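
For reference, a hedged sketch of wiring up such a handler with embedded Jetty 
(assuming Jetty 9's ErrorPageErrorHandler API; the context variable and error 
URIs are placeholders):

{code}
import org.eclipse.jetty.servlet.ErrorPageErrorHandler;
import org.eclipse.jetty.webapp.WebAppContext;

public class ErrorPageSketch {
  static void configureErrorPages(WebAppContext solrContext) {
    ErrorPageErrorHandler errorHandler = new ErrorPageErrorHandler();
    // route OutOfMemoryError (and 500s generally) to placeholder error targets
    errorHandler.addErrorPage(OutOfMemoryError.class, "/errors/oom");
    errorHandler.addErrorPage(500, "/errors/500");
    solrContext.setErrorHandler(errorHandler);
  }
}
{code}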

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.<init>(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.getFrame(SegmentTermsEnum.java:215)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.pushFrame(SegmentTermsEnum.java:241)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seek

[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk-9-ea+95) - Build # 15258 - Failure!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15258/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseConcMarkSweepGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=5153, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=5150, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=5154, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=5152, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=5151, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=5153, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.ScheduledThreadP

[jira] [Created] (SOLR-8552) Unbalanced quotes in bin/solr when -D arguments are passed via the -a switch

2016-01-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8552:
---

 Summary: Unbalanced quotes in bin/solr when -D arguments are 
passed via the -a switch
 Key: SOLR-8552
 URL: https://issues.apache.org/jira/browse/SOLR-8552
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Reporter: Shalin Shekhar Mangar
Priority: Minor


This works:
{code}
bin/solr start -p 8983 -h localhost -m 2g -e schemaless 
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false
{code}
but this does not:
{code}
bin/solr start -p 8983 -h localhost -m 2g -e schemaless -a 
"-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false"
{code}

The output for the latter is:
{code}
Solr home directory 
/home/shalin/temp/bench/solr/wiki-4k-schema/example/schemaless/solr already 
exists.

Starting up Solr on port 8983 using command:
bin/solr start -p 8983 -s "example/schemaless/solr" -m 2g 
-Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false" -a "-Dcom.sun.management.jmxremote"


ERROR: Unbalanced quotes in bin/solr start -p 8983 -s "example/schemaless/solr" 
-m 2g -Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false" -a "-Dcom.sun.management.jmxremote"
{code}

I know bin/solr supports direct pass through of -D properties but it should 
still work with -a option because that is how many people would have configured 
-D properties before support for the pass-through was added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8496) Facet search count numbers are falsified by older document versions

2016-01-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098922#comment-15098922
 ] 

Hoss Man commented on SOLR-8496:


Also: does this reproduce for you when indexing from scratch, or is this an 
index you originally built with an older version of Solr and then upgraded to 
5.4? (trying to figure out if there are older segments and maybe the bug is 
specific to 5.4 reading deleted docs from those older segments)

Can you also run CheckIndex (command line) and provide all of that output?
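
(If it helps, CheckIndex can also be run programmatically; a hedged sketch 
assuming the Lucene 5.x CheckIndex API, with a placeholder index path:)

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CheckIndexSketch {
  public static void main(String[] args) throws Exception {
    // placeholder path; point this at the core's data/index directory
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/core/data/index"));
         CheckIndex checker = new CheckIndex(dir)) {
      checker.setInfoStream(System.out);           // print the full per-segment report
      CheckIndex.Status status = checker.checkIndex();
      System.out.println("index clean? " + status.clean);
    }
  }
}
{code}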

> Facet search count numbers are falsified by older document versions
> ---
>
> Key: SOLR-8496
> URL: https://issues.apache.org/jira/browse/SOLR-8496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Linux 3.16.0-4-amd64 x86_64 Debian 8.2
> openjdk-7-jre-headless:amd64   version 7u91-2.6.3-1~deb8u1
> solr-5.4.0, extracted from official tar
> Default solr settings from install script:SOLR_HEAP="512m"
> GC_LOG_OPTS="-verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails \
> -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
> -XX:+PrintGCApplicationStoppedTime"
> GC_TUNE="-XX:NewRatio=3 \
> -XX:SurvivorRatio=4 \
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+UseParNewGC \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:+CMSScavengeBeforeRemark \
> -XX:PretenureSizeThreshold=64m \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=50 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled \
> -XX:+ParallelRefProcEnabled"
> SOLR_OPTS="$SOLR_OPTS -Xss256k"
>Reporter: Andreas Müller
>
> Our setup is based on multiple cores. In one core we have a multi-valued field 
> with integer values, and some other unimportant fields. We're using multi-faceting 
> for this field.
> We're querying a test scenario with:
> {code}
> http://localhost:8983/solr/core-name/select?q=dummyask: (true) AND 
> manufacturer: false AND id: (15039 16882 10850 
> 20781)&fq={!tag=professions}professions: 
> (59)&fl=id&wt=json&indent=true&facet=true&facet.field={!ex=professions}professions
> {code}
> - Query: (numDocs:48545, maxDoc:48545)
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 4
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> - Then we update one document and change some fields (numDocs:48545, 
> maxDoc:48546) *The number of maxDocs is increased*
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 5
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> *The Problem:*
> In the first query, we're getting a facet count of 4, which is correct. After 
> updating one document, we're getting 5 as a result, which is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-2798.

   Resolution: Fixed
Fix Version/s: Trunk
   5.5

I think that wraps it up -- thanks again for your patience and perseverance, 
Demian.

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098907#comment-15098907
 ] 

ASF subversion and git services commented on SOLR-2798:
---

Commit 1724699 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1724699 ]

SOLR-2798: remove deprecated methods from trunk

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



live_nodes and state.json can get out of sync

2016-01-14 Thread Erick Erickson
We've seen at least two cases "in the wild" where a Solr node is in
fine shape, but live_nodes does NOT list a Solr node and the
corresponding state.json for that node shows it as "active".

Furthermore, sending queries directly to the core on the machine in
question with distrib=false generates a correct response so Solr is
indeed "live". AFAIK, there's no way to get that Solr node _back_ into
live_nodes without bouncing the server, but that can be disruptive.

I've reproduced this situation locally and can confirm that the
live_nodes entry never comes back. To reproduce it though, I had to:
1> create a collection
2> nuke the live_nodes ZNode

Unfortunately, we don't know how to reproduce the original _real_
condition that caused this in the first place.

Other than manually editing the znode, is there any other way to
reinsert the node in live_nodes? If not, what do people think about a
Collections API that did this? I'm thinking of a command that would
fail unless it was sent to the node that was re-inserting itself. That
way if the node was truly down it couldn't get re-inserted
inappropriately.

Or Solr nodes could periodically query Zookeeper to see if they are
appropriately in live_nodes, but that seems like a lot of work for
something that's apparently _very_ rare. I'm also not sure the node in
question is receiving events from ZK, so I don't think even watching its
own node is a foolproof way for a Solr node that has been taken out of
live_nodes inappropriately to re-insert itself.
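
For the sake of discussion, the periodic self-check could be as small as the
sketch below (hedged illustration only, plain ZooKeeper API; the node name and
client handle are assumed to already exist):

  // hedged sketch only: re-register this node's ephemeral live_nodes entry if missing
  import org.apache.zookeeper.CreateMode;
  import org.apache.zookeeper.ZooDefs;
  import org.apache.zookeeper.ZooKeeper;

  class LiveNodeSelfCheck {
    static void ensureLiveNodeEntry(ZooKeeper zk, String nodeName) throws Exception {
      String path = "/live_nodes/" + nodeName;
      if (zk.exists(path, false) == null) {
        // ephemeral: it disappears again automatically if this session really is dead
        zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      }
    }
  }

That obviously hand-waves over session and reconnect handling, which is part of
why it feels like a lot of work for something so rare.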


***
Second issue. In extremely heavy indexing situations, replicas will
never catch up to a leader if for some reason they go into recovery.
Of course if all the replicas for a shard go down, everything grinds
to a halt.

What do people think about an option to essentially toggle whether
recoveries are even attempted? Yet another Collections API perhaps,
DISABLERECOVERY=true|false. The case in point is a situation that
indexes over 1M docs/second. Or maybe this is a property on the
collection in ZK that you could change with MODIFYCOLLECTION and
specify on CREATE. Actually, I like this latter a lot better than
proliferating another API action.

Yes, that puts data integrity at risk since eventually you get to a
leader-only shard. But that's already at risk since the replicas
demonstrably never catch up.

Of course the default state would be to always do the recovery as we
do now. For installations that saw this periodically happen, they
could change the option during an indexing lull, allow recovery then
change the property back.

Not entirely sure what I think of the idea at all, but again this is
something we're seeing in the wild.

I'll raise JIRAs unless the ideas get shot down.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.4-Linux (64bit/jdk-9-ea+95) - Build # 377 - Failure!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/377/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([E81573B2C9D7BEE4:B5C97227E92A3056]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at 
org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor(TestTikaEntityProcessor.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOver

[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2016-01-14 Thread Vijay Sekhri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098859#comment-15098859
 ] 

Vijay Sekhri commented on SOLR-5743:


Hi Mikhail, Dr. Oleg
The requirement to use this feature is to have a ToParentBlockJoinQuery like 
q={!parent which=}. 
To use the ToParentBlockJoinQuery, one needs to search on fields present in the 
child documents. In the real world your parent document would have most of the 
common fields and the child documents would have only the differing fields. For 
example, just like BRAND_s, there would be fields like description_s, name_s, 
title_s, partnumber_s, etc. in the parent document only. As they are the same for 
all the child documents, one would not repeat them in each child document, but 
rather keep them only in the parent document. In the child documents we would have 
attributes like COLOR_s and SIZE_s, as they differ.

Now for any real searches, one would search on fields like BRAND_s, 
description_s, name_s, title_s, partnumber_s, etc. to return the appropriate 
documents. However, those fields are only present in the parent docs.

So searching them like q={!parent 
which=type_s:parent}BRAND_s:Nike&facet=true&child.facet.field=COLOR_s does not 
work, because BRAND_s:Nike matches the parent document. It also gives this error:
child query must only match non-parent docs, but parent docID=2 matched 
childScorer=class org.apache.lucene.search.TermScorer

One could search on fields from the child like this without any problem:
q={!parent%20which=type_s:parent}COLOR_s:Blue&facet=true&child.facet.field=COLOR_s

To use this feature, do we have to copy all the common fields (and thousands of 
such fields) back into the child documents (repeating them for every child) and 
search on those fields? For example, copying the BRAND_s field like this:

[{
  "id": 10,
  "type_s": "parent",
  "BRAND_s": "Nike",
  "_childDocuments_": [{
    "id": 11,
    "COLOR_s": "Red",
    "SIZE_s": "XL",
    "BRAND_s": "Nike"
  },
  {
    "id": 12,
    "COLOR_s": "Blue",
    "SIZE_s": "XL",
    "BRAND_s": "Nike"
  }]
}]

This way the query works:
q={!parent which=type_s:parent}BRAND_s:Nike&facet=true&child.facet.field=COLOR_s


Or is there some other way where we can still facet on the child 
fields (SIZE_s), aggregate the counts onto the parent docs (id:10), and still 
search on the common fields from the parent docs (BRAND_s)?


> Faceting with BlockJoin support
> ---
>
> Key: SOLR-5743
> URL: https://issues.apache.org/jira/browse/SOLR-5743
> Project: Solr
>  Issue Type: New Feature
>  Components: faceting
>Reporter: abipc
>Assignee: Mikhail Khludnev
>  Labels: features
> Fix For: 5.5
>
> Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch
>
>
> For a sample inventory(note - nested documents) like this -   
>  
> 10
> parent
> Nike
> 
> 11
> Red
> XL
> 
> 
> 12
> Blue
> XL
> 
> 
> Faceting results must contain - 
> Red(1)
> XL(1) 
> Blue(1) 
> for a "q=*" query. 
> PS : The inventory example has been taken from this blog - 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098818#comment-15098818
 ] 

ASF subversion and git services commented on SOLR-2798:
---

Commit 1724686 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1724686 ]

SOLR-2798: Fixed local params to work correctly with multivalued params (merge 
r1724679)

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8551) Make collection deletion more robust.

2016-01-14 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8551:
--
Attachment: SOLR-8551.patch

Working towards some improvements.

One of the cases I'd like to fix is that sometimes you can get a failure because 
the core is already unloaded. Since that is what we want, we should not fail in 
that case.

I've added some support for getting exception class names for remote exceptions 
- I'd like to use that with a new NonExistentCore SolrException so that we can 
specifically ignore "core already unloaded" exceptions when unloading a 
core while deleting a collection.
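
To sketch the intended usage (a hedged illustration only; the helper methods and 
the exact exception class name are hypothetical, not the attached patch):

{code}
import org.apache.solr.common.SolrException;

// Hedged sketch only: ignore "core already unloaded" failures during collection delete.
public abstract class UnloadCoreSketch {
  protected abstract void unloadCore(String coreName);                     // hypothetical remote UNLOAD call
  protected abstract String getRemoteExceptionClassName(SolrException e);  // hypothetical accessor

  public void unloadIgnoringAlreadyGone(String coreName) {
    try {
      unloadCore(coreName);
    } catch (SolrException e) {
      if ("NonExistentCoreException".equals(getRemoteExceptionClassName(e))) {
        return; // the core is already gone, which is the end state we wanted
      }
      throw e;
    }
  }
}
{code}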

> Make collection deletion more robust.
> -
>
> Key: SOLR-8551
> URL: https://issues.apache.org/jira/browse/SOLR-8551
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8551.patch
>
>
> We need to harden collection deletion so that it's more difficult to end up 
> in partial states or receive unhelpful errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8551) Make collection deletion more robust.

2016-01-14 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8551:
-

 Summary: Make collection deletion more robust.
 Key: SOLR-8551
 URL: https://issues.apache.org/jira/browse/SOLR-8551
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller


We need to harden collection deletion so that it's more difficult to end up in 
partial states or receive unhelpful errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2016-01-14 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098800#comment-15098800
 ] 

Michael Sun commented on SOLR-8416:
---

It's great that you have been working on injection. I can use it as an example. 
Thanks [~markrmil...@gmail.com] for the suggestion.


> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> large cluster the cores may not be alive for some period of time after cores 
> are created. For any thing requested during that period, Solr appears 
> unstable and can return failure. Therefore it's better  the collection 
> creation API waits for all cores to become alive and returns after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098798#comment-15098798
 ] 

Shai Erera commented on LUCENE-6973:


I ran the tests and {{TestRandomChains}} fails with this:

{noformat}
   [junit4]   2> NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.8.0_40 
(64-bit)/cpus=8,threads=1,free=393306544,total=510656512
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.seed=5FF882C20C905C54 -Dtests.slow=true -Dtests.locale=cs 
-Dtests.timezone=America/Buenos_Aires -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s | TestRandomChains (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: public 
org.apache.lucene.analysis.miscellaneous.DateRecognizerFilter(org.apache.lucene.analysis.TokenStream,java.text.DateFormat)
 has unsupported parameter types
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5FF882C20C905C54]:0)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.beforeClass(TestRandomChains.java:233)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

I tracked it to {{argProducers}} not having DateFormat defined. Is it OK to add 
it?

> Improve TeeSinkTokenFilter
> --
>
> Key: LUCENE-6973
> URL: https://issues.apache.org/jira/browse/LUCENE-6973
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6973.patch, LUCENE-6973.patch, LUCENE-6973.patch, 
> LUCENE-6973.patch, LUCENE-6973.patch
>
>
> {{TeeSinkTokenFilter}} can be improved in several ways, as it's written today:
> The most major one is removing {{SinkFilter}} which just doesn't work and is 
> confusing. E.g., if you set a {{SinkFilter}} which filters tokens, the 
> attributes on the stream such as {{PositionIncrementAttribute}} are not 
> updated. Also, if you update any attribute on the stream, you affect other 
> {{SinkStreams}} ... It's best if we remove this confusing class, and let 
> consumers reuse existing {{TokenFilters}} by chaining them to the sink stream.
> After we do that, we can make all the cached states a single (immutable) 
> list, which is shared between all the sink streams, so we don't need to keep 
> many references around, and also deal with {{WeakReference}}.
> Besides that there are some other minor improvements to the code that will 
> come after we clean up this class.
> From a backwards-compatibility standpoint, I don't think that {{SinkFilter}} 
> is actually used anywhere (since it just ... confusing and doesn't work as 
> expected), and therefore I believe it won't affect anyone. If however someone 
> did implement a {{SinkFilter}}, it should be trivial to convert it to a 
> {{TokenFilter}} and chain it to the {{SinkStream}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8496) Facet search count numbers are falsified by older document versions

2016-01-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098779#comment-15098779
 ] 

Erick Erickson commented on SOLR-8496:
--

I couldn't reproduce with a test case either in a JUnit test (non Cloud, one 
core).


> Facet search count numbers are falsified by older document versions
> ---
>
> Key: SOLR-8496
> URL: https://issues.apache.org/jira/browse/SOLR-8496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Linux 3.16.0-4-amd64 x86_64 Debian 8.2
> openjdk-7-jre-headless:amd64   version 7u91-2.6.3-1~deb8u1
> solr-5.4.0, extracted from official tar
> Default solr settings from install script:SOLR_HEAP="512m"
> GC_LOG_OPTS="-verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails \
> -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
> -XX:+PrintGCApplicationStoppedTime"
> GC_TUNE="-XX:NewRatio=3 \
> -XX:SurvivorRatio=4 \
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+UseParNewGC \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:+CMSScavengeBeforeRemark \
> -XX:PretenureSizeThreshold=64m \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=50 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled \
> -XX:+ParallelRefProcEnabled"
> SOLR_OPTS="$SOLR_OPTS -Xss256k"
>Reporter: Andreas Müller
>
> Our setup is based on multiple cores. In one core we have a multi-valued field 
> with integer values, and some other unimportant fields. We're using multi-faceting 
> for this field.
> We're querying a test scenario with:
> {code}
> http://localhost:8983/solr/core-name/select?q=dummyask: (true) AND 
> manufacturer: false AND id: (15039 16882 10850 
> 20781)&fq={!tag=professions}professions: 
> (59)&fl=id&wt=json&indent=true&facet=true&facet.field={!ex=professions}professions
> {code}
> - Query: (numDocs:48545, maxDoc:48545)
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 4
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> - Then we update one document and change some fields (numDocs:48545, 
> maxDoc:48546) *The number of maxDocs is increased*
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 5
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> *The Problem:*
> In the first query, we're getting a facet count of 4, which is correct. After 
> updating one document, we're getting 5 as a result, which is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-2798:
--

Assignee: Hoss Man  (was: Anshum Gupta)

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 335 - Failure!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/335/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([5A57E99FDAD5D745:D203D6457429BABD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:857)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuite

[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-01-14 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098769#comment-15098769
 ] 

Ramkumar Aiyengar commented on SOLR-8539:
-

I don't think there is a sensible way to "limp along". Even if you did so and 
returned 5xx errors, you aren't doing anyone any favours: without any other 
special handling you would most likely just keep returning that same error over 
and over again, or worse, hit weird bugs in application code which potentially 
do more damage than a simple exit. With an exit due to the error, at least any 
decent process manager would respawn the node, or a load balancer would reroute 
requests elsewhere. I would suggest not handling Errors in general, unless there 
are specific Errors you can whitelist as safe to trap.
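
A minimal sketch of that whitelisting idea, assuming a hypothetical helper (none 
of these names come from the SOLR-8539 patch): trap only Error subclasses 
explicitly considered safe, and rethrow everything else so the JVM can exit and 
be respawned.

{code:java}
// Hypothetical illustration only -- not part of the SOLR-8539 patch.
public final class ErrorPolicy {

  /** Errors considered safe to report as a 5xx instead of killing the JVM. */
  private static boolean isSafeToTrap(Error e) {
    // Example whitelist: a LinkageError for an optional class may be survivable;
    // an OutOfMemoryError or InternalError is not.
    return e instanceof LinkageError;
  }

  public static void handle(Throwable t) {
    if (t instanceof Error && !isSafeToTrap((Error) t)) {
      throw (Error) t; // let the OOM killer / process manager do its job
    }
    // Exceptions and whitelisted Errors can be turned into an error response here.
  }
}
{code}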

> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucen

Re: [CI] Lucene 5x Linux 64 Test Only - Build # 78934 - Failure!

2016-01-14 Thread Nicholas Knize
Looks like this one is related to
https://issues.apache.org/jira/browse/LUCENE-6951. GeoPointField benefits
but BKD is angry since the split algorithm is so different. I'll fix.

On Thu, Jan 14, 2016 at 11:54 AM, Michael McCandless 
wrote:

> I suspect this is a similar failure to my last comment on
> https://issues.apache.org/jira/browse/LUCENE-6956 again, a corner case in
> the sandbox geo polygon APIs.
>
> Nick can you confirm?
>
> Is it possible there were Lucene sandbox geo bug fixes from trunk that we
> didn't backport to 5.x?
>
> Mike McCandless
>
> On Thu, Jan 14, 2016 at 6:06 AM,  wrote:
>
>> *BUILD FAILURE*
>> Build URL
>> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/78934/
>> Project:lucene_linux_java8_64_test_only Randomization: 
>> JDKEA8,local,heap[979m],-server
>> +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on Date of
>> build:Thu, 14 Jan 2016 11:57:40 +0100 Build duration:8 min 42 sec
>> *CHANGES* No Changes
>> *BUILD ARTIFACTS*
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J0-20160114_120557_901.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J1-20160114_120557_901.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J2-20160114_120557_901.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J3-20160114_120557_901.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J4-20160114_120557_901.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J5-20160114_120557_902.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J6-20160114_120557_902.events
>> 
>> -
>> checkout/lucene/build/sandbox/test/temp/junit4-J7-20160114_120557_902.events
>> 
>> *FAILED JUNIT TESTS* Name: org.apache.lucene.bkdtree Failed: 1 test(s),
>> Passed: 4 test(s), Skipped: 5 test(s), Total: 10 test(s)
>> *- Failed: org.apache.lucene.bkdtree.TestBKDTree.testRandomMedium *
>> *CONSOLE OUTPUT* [...truncated 10878 lines...] [junit4] [junit4] [junit4]
>> JVM J0: 0.54 .. 8.15 = 7.62s [junit4] JVM J1: 0.79 .. 9.33 = 8.54s [junit4]
>> JVM J2: 0.54 .. 12.59 = 12.06s [junit4] JVM J3: 0.29 .. 9.32 = 9.03s [junit4]
>> JVM J4: 0.54 .. 12.10 = 11.56s [junit4] JVM J5: 0.29 .. 15.10 = 14.82s 
>> [junit4]
>> JVM J6: 0.54 .. 9.09 = 8.55s [junit4] JVM J7: 0.54 .. 14.10 = 13.56s [junit4]
>> Execution time total: 15 seconds [junit4] Tests summary: 20 suites, 142
>> tests, 1 error, 11 ignored (11 assumptions) BUILD FAILED 
>> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:472:
>> The following error occurred while executing this line: 
>> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:2240:
>> The following error occurred while executing this line: 
>> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/module-build.xml:58:
>> The following error occurred while executing this line: 
>> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444:
>> The following error occurred while executing this line: 
>> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1000:
>> There were test failures: 20 suites, 142 tests, 1 error, 11 ignored (11
>> assumptions) [seed: 215EF7D7DC231C40] Total time: 8 minutes 21 seconds Build
>> step 'Invoke Ant' marked build as failure Archiving artifacts Recording
>> test results [description-setter] Description set:
>> JDKEA8,local,heap[979m],-server +UseSerialGC +UseCompressedOops
>> +AggressiveOpts,sec manager on Email was triggered for: Failure - 1st Trigger
>> Failure - Any was overridden by another trigger and will not send an email. 
>> Trigger
>> Failure - Still was overri

[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098746#comment-15098746
 ] 

ASF subversion and git services commented on SOLR-2798:
---

Commit 1724679 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1724679 ]

SOLR-2798: Fixed local params to work correctly with multivalued params

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Anshum Gupta
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-01-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098702#comment-15098702
 ] 

Shawn Heisey commented on SOLR-8542:


I personally use 80 columns for files like README.txt, but from other people's 
additions to CHANGES.txt, I know that others are using more.  I frequently view 
text files like this over ssh or in a terminal, so I find lines longer than 80 
characters annoying.  For source code, I edit in an IDE more often than with vi, 
so longer lines are not really a problem there.

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: README.md, SOLR-8542-branch_5x.patch, 
> SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously presented by the authors at Lucene/Solr 
> Revolution 2015 ( 
> http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp
>  ).
> The attached code was jointly worked on by Joshua Pantony, Michael Nilsson, 
> and Diego Ceccarelli.
> Any chance this could make it into a 5x release? We've also attached 
> documentation as a github MD file, but are happy to convert to a desired 
> format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2798:
---
Attachment: SOLR-2798.patch


Here's Demian's PR as a unified patch with a few more tweaks based on my 
review...

* deleted a really old/obsolete "note to self" that Demian had accidentally 
migrated to a place where it made even less sense
* added a few more asserts to QueryEqualityTest & DisMaxRequestHandlerTest for 
good measure
* noticed that QueryParsing.getLocalParams was still delegating to the old Map 
version
** added asserts to SimpleFacetsTest (using multiple facet.range.other 
localparams) to demonstrate how that still causes the bug in some cases
** fixed QueryParsing.getLocalParams so SimpleFacetsTest started passing again


Tests all pass. 

Still running precommit & then i'll move forward with trunk & start backporting

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Anshum Gupta
> Attachments: SOLR-2798.patch
>
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-01-14 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098670#comment-15098670
 ] 

Christine Poerschke commented on SOLR-8542:
---

Hello Joshua, Michael and Diego. Thanks for your patch for this new feature.

Just to say that i've started taking a look at yesterday's 
SOLR-8542-trunk.patch and have three simple observations so far, in no 
particular order:
* Many of the lines in {{solr/contrib/ltr/README.txt}} are very long. Having 
said that, I do not know what the recommended maximum line length for README 
files is, and am perhaps just using the wrong browser or editor to read them.
* The binary diff for {{solr/contrib/ltr/test-lib/jcl-over-slf4j-1.7.7.jar}} 
seems to form part of the patch, probably unintentionally.
* Running {{ant validate}} after applying the patch locally points out 'tabs 
instead of spaces' and 'invalid logging pattern' for some of the files.

(The https://en.wikipedia.org/wiki/Learning_to_rank page mentioned in the 
README.txt for reading up on learning to rank will be my commute reading.)

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: README.md, SOLR-8542-branch_5x.patch, 
> SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously presented by the authors at Lucene/Solr 
> Revolution 2015 ( 
> http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp
>  ).
> The attached code was jointly worked on by Joshua Pantony, Michael Nilsson, 
> and Diego Ceccarelli.
> Any chance this could make it into a 5x release? We've also attached 
> documentation as a github MD file, but are happy to convert to a desired 
> format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2963 - Failure!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2963/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
QUERY FAILED: 
xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt447']
  request=/schema/fields?wt=xml  response=  0   48 
 _root_ string 
false true 
true   _version_ long true true   
constantField tdouble   
id string  
   false true   
  true true 
true   newTestFieldInt0 tlong  
 newTestFieldInt1 tlong   newTestFieldInt10 tlong  
 newTestFieldInt100 tlong   newTestFieldInt101 tlong 
  newTestFieldInt102 tlong   newTestFieldInt103 tlong 
  newTestFieldInt104 tlong   newTestFieldInt105 tlong 
  newTestFieldInt106 tlong   newTestFieldInt107 tlong 
  newTestFieldInt108 tlong   newTestFieldInt109 tlong 
  newTestFieldInt11 tlong   newTestFieldInt110 tlong 
  newTestFieldInt111 tlong   newTestFieldInt112 tlong 
  newTestFieldInt113 tlong   newTestFieldInt114 tlong 
  newTestFieldInt115 tlong   newTestFieldInt116 tlong 
  newTestFieldInt117 tlong   newTestFieldInt118 tlong 
  newTestFieldInt119 tlong   newTestFieldInt12 tlong  
 newTestFieldInt120 tlong   newTestFieldInt121 tlong 
  newTestFieldInt122 tlong   newTestFieldInt123 tlong 
  newTestFieldInt124 tlong   newTestFieldInt125 tlong 
  newTestFieldInt126 tlong   newTestFieldInt127 tlong 
  newTestFieldInt128 tlong   newTestFieldInt129 tlong 
  newTestFieldInt13 tlong   newTestFieldInt130 tlong 
  newTestFieldInt131 tlong   newTestFieldInt132 tlong 
  newTestFieldInt133 tlong   newTestFieldInt134 tlong 
  newTestFieldInt135 tlong   newTestFieldInt136 tlong 
  newTestFieldInt137 tlong   newTestFieldInt138 tlong 
  newTestFieldInt139 tlong   newTestFieldInt14 tlong  
 newTestFieldInt140 tlong   newTestFieldInt141 tlong 
  newTestFieldInt142 tlong   newTestFieldInt143 tlong 
  newTestFieldInt144 tlong   newTestFieldInt145 tlong 
  newTestFieldInt146 tlong   newTestFieldInt147 tlong 
  newTestFieldInt148 tlong   newTestFieldInt149 tlong 
  newTestFieldInt15 tlong   newTestFieldInt150 tlong 
  newTestFieldInt151 tlong   newTestFieldInt152 tlong 
  newTestFieldInt153 tlong   newTestFieldInt154 tlong 
  newTestFieldInt155 tlong   newTestFieldInt156 tlong 
  newTestFieldInt157 tlong   newTestFieldInt158 tlong 
  newTestFieldInt159 tlong   newTestFieldInt16 tlong  
 newTestFieldInt160 tlong   newTestFieldInt161 tlong 
  newTestFieldInt162 tlong   newTestFieldInt163 tlong 
  newTestFieldInt164 tlong   newTestFieldInt165 tlong 
  newTestFieldInt166 tlong   newTestFieldInt167 tlong 
  newTestFieldInt168 tlong   newTestFieldInt169 tlong 
  newTestFieldInt17 tlong   newTestFieldInt170 tlong 
  newTestFieldInt171 tlong   newTestFieldInt172 tlong 
  newTestFieldInt173 tlong   newTestFieldInt174 tlong 
  newTestFieldInt175 tlong   newTestFieldInt176 tlong 
  newTestFieldInt177 tlong   newTestFieldInt178 tlong 
  newTestFieldInt179 tlong   newTestFieldInt18 tlong  
 newTestFieldInt180 tlong   newTestFieldInt181 tlong 
  newTestFieldInt182 tlong   newTestFieldInt183 tlong 
  newTestFieldInt184 tlong   newTestFieldInt185 tlong 
  newTestFieldInt186 tlong   newTestFieldInt187 tlong 
  newTestFieldInt188 tlong   newTestFieldInt189 tlong 
  newTestFieldInt19 tlong   newTestFieldInt190 tlong 
  newTestFieldInt191 tlong   newTestFieldInt192 tlong 
  newTestFieldInt193 tlong   newTestFieldInt194 tlong 
  newTestFieldInt195 tlong   newTestFieldInt196 tlong 
  newTestFieldInt197 tlong   newTestFieldInt198 tlong 
  newTestFieldInt199 tl

[jira] [Comment Edited] (SOLR-8444) Merge facet telemetry information from shards

2016-01-14 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098626#comment-15098626
 ] 

Michael Sun edited comment on SOLR-8444 at 1/14/16 6:48 PM:


Without any change, Solr already merges facet telemetry from shards, which is 
pretty nice.

Request
{code}
curl http://localhost:8228/solr/films/select -d 
'q=*:*&wt=json&indent=true&debugQuery=true&json.facet={
top_genre: {
  type:terms,
  field:genre,
  numBuckets:true
}
}'
{code}
Facet telemetry for a collection with one shard.
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10}]},
{code}

Facet telemetry for a collection with two shards. All information about each 
shard is already included; there is a need to add the shard name to each 
telemetry entry.
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "domainSize":1100,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":177,
  "domainSize":557},
{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":178,
  "domainSize":543}]},
{code}



was (Author: michael.sun):
Without any change, Solr already merge facet telemetry from shards which is 
pretty nice.

Request
{code}
curl http://localhost:8228/solr/films/select -d 
'q=*:*&wt=json&indent=true&debugQuery=true&json.facet={
top_genre: {
  type:terms,
  field:genre,
  numBuckets:true
}
}'
{code}
Facet Telemetry for collection with one shard.
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10}]},
{code}

Facet telemetry for collection with two shards
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "domainSize":1100,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":177,
  "domainSize":557},
{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":178,
  "domainSize":543}]},
{code}


> Merge facet telemetry information from shards
> -
>
> Key: SOLR-8444
> URL: https://issues.apache.org/jira/browse/SOLR-8444
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
>
> This is to merge facet telemetry information from shards together. Here is 
> the way to merge different fields in facet telemetry.
> 1. elapse: sum of elapse fields in shard telemetry
> 2. domainSize: sum 
> 3. numBuckets: sum
> 4. other fields: skip in merging.
> In addition, the merged result contains a list of facet telemetry in each 
> shard.
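
A rough sketch of the merge rules described above (sum {{elapse}}, 
{{domainSize}} and {{numBuckets}}, skip other fields, and keep the per-shard 
telemetry alongside the totals). The map-based representation and class name are 
assumptions for illustration, not the actual Solr data structures.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: each shard's "facet-trace" is modeled as a Map<String,Object>.
public final class FacetTelemetryMerger {

  private static final List<String> SUMMED_FIELDS =
      Arrays.asList("elapse", "domainSize", "numBuckets");

  /** Sum the numeric fields above, skip everything else, and keep each
   *  shard's original trace keyed by shard name. */
  public static Map<String, Object> merge(Map<String, Map<String, Object>> perShard) {
    Map<String, Object> merged = new HashMap<>();
    for (Map<String, Object> trace : perShard.values()) {
      for (String field : SUMMED_FIELDS) {
        Object value = trace.get(field);
        if (value instanceof Number) {
          long previous =
              merged.containsKey(field) ? ((Number) merged.get(field)).longValue() : 0L;
          merged.put(field, previous + ((Number) value).longValue());
        }
      }
    }
    merged.put("shards", perShard); // per-shard telemetry, keyed by shard name
    return merged;
  }
}
{code}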



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8444) Merge facet telemetry information from shards

2016-01-14 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098626#comment-15098626
 ] 

Michael Sun commented on SOLR-8444:
---

Without any change, Solr already merges facet telemetry from shards, which is 
pretty nice.

Request
{code}
curl http://localhost:8228/solr/films/select -d 
'q=*:*&wt=json&indent=true&debugQuery=true&json.facet={
top_genre: {
  type:terms,
  field:genre,
  numBuckets:true
}
}'
{code}
Facet telemetry for a collection with one shard.
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10}]},
{code}

Facet telemetry for a collection with two shards
{code}
"facet-trace":{
  "processor":"FacetQueryProcessor",
  "elapse":0,
  "query":null,
  "domainSize":1100,
  "sub-facet":[{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":177,
  "domainSize":557},
{
  "processor":"FacetFieldProcessorUIF",
  "elapse":0,
  "field":"genre",
  "limit":10,
  "numBuckets":178,
  "domainSize":543}]},
{code}


> Merge facet telemetry information from shards
> -
>
> Key: SOLR-8444
> URL: https://issues.apache.org/jira/browse/SOLR-8444
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
>
> This is to merge facet telemetry information from shards together. Here is 
> the way to merge different fields in facet telemetry.
> 1. elapse: sum of elapse fields in shard telemetry
> 2. domainSize: sum 
> 3. numBuckets: sum
> 4. other fields: skip in merging.
> In addition, the merged result contains a list of facet telemetry in each 
> shard.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2016-01-14 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098609#comment-15098609
 ] 

David de Kleer edited comment on SOLR-7739 at 1/14/16 6:43 PM:
---

Hi Alessandro,

It turned out there was no problem at all, it was more that I didn't see that 
labels were being added (or not) in SOLR's output/the web interface. That's why 
I added a print statement and a slight modification to the 
ClassificationUpdateProcessor, something like

{code:java}
...
score = classificationResult.getScore();
System.out.println("(ClassificationUpdateProcessor) Found class " + 
assignedClass + " with score " + score + " for this document.");
doc.addField("label_" + assignedClass, score);
...
{code}

for example. The print statement and the searchable score (below 1) gave me an 
indication that labels were really being added. So again, it turned out it 
wasn't really a problem. And thanks again for updating the patch! (y)

EDIT: In fact, all of this means that I just wanted to make the distinction 
between annotations and automatically added labels/categories a bit more clear.

With kind regards,

David


was (Author: daviddekleer):
Hi Alessandro,

It turned out there was no problem at all, it was more that I didn't see that 
labels were being added (or not) in SOLR's output/the web interface. That's why 
I added a print statement and a slight modification to the 
ClassificationUpdateProcessor, something like

{code:java}
...
score = classificationResult.getScore();
System.out.println("(ClassificationUpdateProcessor) Found class " + 
assignedClass + " with score " + score + " for this document.");
doc.addField("label_" + assignedClass, score);
...
{code}

for example. The print statement and the searchable score (below 1) gave me an 
indication that labels were really being added. So again, it turned out it 
wasn't really a problem. And thanks again for updating the patch! (y)

EDIT: In fact, all of this means that I just wanted to make the distinction 
between annotated and automatically added labels/categories a bit more clear.

With kind regards,

David

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch, SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2016-01-14 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098609#comment-15098609
 ] 

David de Kleer edited comment on SOLR-7739 at 1/14/16 6:42 PM:
---

Hi Alessandro,

It turned out there was no problem at all, it was more that I didn't see that 
labels were being added (or not) in SOLR's output/the web interface. That's why 
I added a print statement and a slight modification to the 
ClassificationUpdateProcessor, something like

{code:java}
...
score = classificationResult.getScore();
System.out.println("(ClassificationUpdateProcessor) Found class " + 
assignedClass + " with score " + score + " for this document.");
doc.addField("label_" + assignedClass, score);
...
{code}

for example. The print statement and the searchable score (below 1) gave me an 
indication that labels were really being added. So again, it turned out it 
wasn't really a problem. And thanks again for updating the patch! (y)

EDIT: In fact, all of this means that I just wanted to make the distinction 
between annotated and automatically added labels/categories a bit more clear.

With kind regards,

David


was (Author: daviddekleer):
Hi Alessandro,

It turned out there was no problem at all, it was more that I didn't see that 
labels were being added (or not) in SOLR's output/the web interface. That's why 
I added a print statement and a slight modification to the 
ClassificationUpdateProcessor, something like

{code:java}
...
score = classificationResult.getScore();
System.out.println("(ClassificationUpdateProcessor) Found class " + 
assignedClass + " with score " + score + " for this document.");
doc.addField("label_" + assignedClass, score);
...
{code}

for example. The print statement and the searchable score (below 1) gave me an 
indication that labels were really being added. So again, it turned out it 
wasn't really a problem. And thanks again for updating the patch! (y)

With kind regards,

David

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch, SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2016-01-14 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098609#comment-15098609
 ] 

David de Kleer commented on SOLR-7739:
--

Hi Alessandro,

It turned out there was no problem at all, it was more that I didn't see that 
labels were being added (or not) in SOLR's output/the web interface. That's why 
I added a print statement and a slight modification to the 
ClassificationUpdateProcessor, something like

{code:java}
...
score = classificationResult.getScore();
System.out.println("(ClassificationUpdateProcessor) Found class " + 
assignedClass + " with score " + score + " for this document.");
doc.addField("label_" + assignedClass, score);
...
{code}

for example. The print statement and the searchable score (below 1) gave me an 
indication that labels were really being added. So again, it turned out it 
wasn't really a problem. And thanks again for updating the patch! (y)

With kind regards,

David

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch, SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6279) cores?action=UNLOAD can unregister unclosed core

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098581#comment-15098581
 ] 

ASF subversion and git services commented on SOLR-6279:
---

Commit 1724668 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1724668 ]

SOLR-6279: cores?action=UNLOAD now waits for the core to close before 
unregistering it from ZK. (merge in revision 1724654 from trunk)

> cores?action=UNLOAD can unregister unclosed core
> 
>
> Key: SOLR-6279
> URL: https://issues.apache.org/jira/browse/SOLR-6279
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>
> baseline:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   /somewhere/instanceA/collection1_shard2/core.properties
>   /somewhere/instanceA/collection1_shard2/data
>   /somewhere/instanceB
> {code}
> actions:
> {code}
>   curl 
> "http://host:port/solr/admin/cores?action=UNLOAD&core=collection1_shard2";
>   # since UNLOAD completed we should now be free to move the unloaded core's 
> files as we wish
>   mv /somewhere/instanceA/collection1_shard2 
> /somewhere/instanceB/collection1_shard2
> {code}
> expected result:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   # collection1_shard2 files have been fully relocated
>   /somewhere/instanceB/collection1_shard2/core.properties.unloaded
>   /somewhere/instanceB/collection1_shard2/data
> {code}
> actual result:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   /somewhere/instanceA/collection1_shard2/data
>   # collection1_shard2 files have not been fully relocated and/or some files 
> were left behind in instanceA because the UNLOAD action had returned prior to 
> the core being closed
>   /somewhere/instanceB/collection1_shard2/core.properties.unloaded
>   /somewhere/instanceB/collection1_shard2/data
> {code}
> +proposed fix:+ Changing CoreContainer.unload to wait for core to close 
> before unregistering it from ZK. Adding testMidUseUnload method to 
> TestLazyCores.
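
A minimal sketch of the shape of that fix, assuming a hypothetical close-listener 
hook (the interface and names below are illustrative, not the actual 
CoreContainer change): UNLOAD blocks until the core has really closed, and only 
then unregisters it from ZK.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative only: names and wiring are hypothetical.
public final class UnloadSketch {

  interface Core {
    void addCloseListener(Runnable onClosed); // assumed callback fired after close completes
    void close();                             // may return before all references are released
  }

  public static void unloadAndWait(Core core, Runnable unregisterFromZk, long timeoutSec)
      throws InterruptedException {
    CountDownLatch closed = new CountDownLatch(1);
    core.addCloseListener(closed::countDown);
    core.close();
    if (!closed.await(timeoutSec, TimeUnit.SECONDS)) {
      throw new IllegalStateException("Core did not close within " + timeoutSec + "s");
    }
    unregisterFromZk.run(); // only now is it safe to move the core's files around
  }
}
{code}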



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Description: 
Currently all streams in the Streaming API are synchronously *pulled* by a 
client.

It would be great to add the capability to have asynchronous streams that live 
within Solr and can *push* content as well. This would facilitate very 
large-scale alerting.

  was:
Currently all streams in the Streaming API are synchronously *pulled* from a 
client.

It would be great to add the capability to have asynchronous streams that live 
within Solr and can *push* content as well. This would facilitate very 
large-scale alerting.


> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* by a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very 
> large-scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [CI] Lucene 5x Linux 64 Test Only - Build # 78934 - Failure!

2016-01-14 Thread Michael McCandless
I suspect this is a similar failure to my last comment on
https://issues.apache.org/jira/browse/LUCENE-6956 again, a corner case in
the sandbox geo polygon APIs.

Nick can you confirm?

Is it possible there were Lucene sandbox geo bug fixes from trunk that we
didn't backport to 5.x?

Mike McCandless

On Thu, Jan 14, 2016 at 6:06 AM,  wrote:

> *BUILD FAILURE*
> Build URL
> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/78934/
> Project:lucene_linux_java8_64_test_only Randomization: 
> JDKEA8,local,heap[979m],-server
> +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on Date of
> build:Thu, 14 Jan 2016 11:57:40 +0100 Build duration:8 min 42 sec
> *CHANGES* No Changes
> *BUILD ARTIFACTS*
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J0-20160114_120557_901.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J1-20160114_120557_901.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J2-20160114_120557_901.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J3-20160114_120557_901.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J4-20160114_120557_901.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J5-20160114_120557_902.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J6-20160114_120557_902.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J7-20160114_120557_902.events
> 
> *FAILED JUNIT TESTS* Name: org.apache.lucene.bkdtree Failed: 1 test(s),
> Passed: 4 test(s), Skipped: 5 test(s), Total: 10 test(s)
> *- Failed: org.apache.lucene.bkdtree.TestBKDTree.testRandomMedium *
> *CONSOLE OUTPUT* [...truncated 10878 lines...] [junit4] [junit4] [junit4]
> JVM J0: 0.54 .. 8.15 = 7.62s [junit4] JVM J1: 0.79 .. 9.33 = 8.54s [junit4]
> JVM J2: 0.54 .. 12.59 = 12.06s [junit4] JVM J3: 0.29 .. 9.32 = 9.03s [junit4]
> JVM J4: 0.54 .. 12.10 = 11.56s [junit4] JVM J5: 0.29 .. 15.10 = 14.82s 
> [junit4]
> JVM J6: 0.54 .. 9.09 = 8.55s [junit4] JVM J7: 0.54 .. 14.10 = 13.56s [junit4]
> Execution time total: 15 seconds [junit4] Tests summary: 20 suites, 142
> tests, 1 error, 11 ignored (11 assumptions) BUILD FAILED 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:472:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:2240:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/module-build.xml:58:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1000:
> There were test failures: 20 suites, 142 tests, 1 error, 11 ignored (11
> assumptions) [seed: 215EF7D7DC231C40] Total time: 8 minutes 21 seconds Build
> step 'Invoke Ant' marked build as failure Archiving artifacts Recording
> test results [description-setter] Description set:
> JDKEA8,local,heap[979m],-server +UseSerialGC +UseCompressedOops
> +AggressiveOpts,sec manager on Email was triggered for: Failure - 1st Trigger
> Failure - Any was overridden by another trigger and will not send an email. 
> Trigger
> Failure - Still was overridden by another trigger and will not send an
> email. Sending email for trigger: Failure - 1st
>


[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Description: 
Currently all streams in the Streaming API are synchronously *pulled* from a 
client.

It would be great to add the capability to have asynchronous streams that live 
within Solr and can *push* content as well. This would facilitate very 
large-scale alerting.

  was:
Currently all streams are synchronously *pulled* from a client.

It would be great to add the capability to have asynchronous streams that live 
within Solr and can *push* content as well. This would facilitate very 
large-scale alerting.


> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams in the Streaming API are synchronously *pulled* from a 
> client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very 
> large-scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread Demian Katz (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098488#comment-15098488
 ] 

Demian Katz commented on SOLR-2798:
---

As GitHub Bot has pointed out, I've just opened PR #216 with these changes. 
Thanks again for your help, and please let me know if I can do any more to 
improve this.

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Anshum Gupta
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098486#comment-15098486
 ] 

ASF GitHub Bot commented on SOLR-2798:
--

GitHub user demiankatz opened a pull request:

https://github.com/apache/lucene-solr/pull/216

Resolution for SOLR-2798 (add support for multi-valued localParams)

Prior to this fix, when using localParams syntax, a repeated parameter would be 
handled with a "last value wins" policy. This PR changes the behavior to allow 
all values of the parameter to be accepted, which seems like the expected 
behavior, since many Solr parameters are intended to be repeatable.
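
To illustrate the behavioral difference (this is just a sketch, not the code in 
the PR): collecting repeated local params into a {{Map}} keeps only the last 
value, while a multi-valued params object keeps them all. The {{bq}} values are 
taken from the example query in the issue.

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.common.params.ModifiableSolrParams;

// Illustration of "last one wins" for {!dismax bq=111 bq=222}foo.
public class LocalParamsExample {
  public static void main(String[] args) {
    // Map-based parsing can hold only one value per key, so bq=111 is lost.
    Map<String, String> asMap = new HashMap<>();
    asMap.put("bq", "111");
    asMap.put("bq", "222");
    System.out.println(asMap.get("bq"));                          // 222 only

    // ModifiableSolrParams keeps every value of a repeated param.
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("bq", "111");
    params.add("bq", "222");
    System.out.println(String.join(",", params.getParams("bq"))); // 111,222
  }
}
{code}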

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/demiankatz/lucene-solr solr-2798-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #216


commit b991e72b04a4de4c35f4fa1dc1338eafb67756d8
Author: Demian Katz 
Date:   2016-01-13T19:43:13Z

Allow parseLocalParams to target a ModifiableSolrParams object.
- Progress on SOLR-2798.

commit 314a58c593de4cf7ed80bffc7b3e8034d17143be
Author: Demian Katz 
Date:   2016-01-13T20:06:37Z

Switch calls to parseLocalParams to use ModifiableSolrParams instead of Map.
- See SOLR-2798.

commit 107f6224aa33edea08a961adcb2e57b2cc96e69e
Author: Demian Katz 
Date:   2016-01-14T14:12:27Z

Deprecated obsolete methods; added comments.

commit ed544b6d726e4e79319cfb5808cc3fdfde6e0f26
Author: Demian Katz 
Date:   2016-01-14T14:51:24Z

Added higher-level test for localParams fix.




> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Anshum Gupta
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Resolution for SOLR-2798 (add support fo...

2016-01-14 Thread demiankatz
GitHub user demiankatz opened a pull request:

https://github.com/apache/lucene-solr/pull/216

Resolution for SOLR-2798 (add support for multi-valued localParams)

Prior to this fix, when using localParams syntax, a repeated parameter would be 
handled with a "last value wins" policy. This PR changes the behavior to allow 
all values of the parameter to be accepted, which seems like the expected 
behavior, since many Solr parameters are intended to be repeatable.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/demiankatz/lucene-solr solr-2798-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #216


commit b991e72b04a4de4c35f4fa1dc1338eafb67756d8
Author: Demian Katz 
Date:   2016-01-13T19:43:13Z

Allow parseLocalParams to target a ModifiableSolrParams object.
- Progress on SOLR-2798.

commit 314a58c593de4cf7ed80bffc7b3e8034d17143be
Author: Demian Katz 
Date:   2016-01-13T20:06:37Z

Switch calls to parseLocalParams to use ModifiableSolrParams instead of Map.
- See SOLR-2798.

commit 107f6224aa33edea08a961adcb2e57b2cc96e69e
Author: Demian Katz 
Date:   2016-01-14T14:12:27Z

Deprecated obsolete methods; added comments.

commit ed544b6d726e4e79319cfb5808cc3fdfde6e0f26
Author: Demian Katz 
Date:   2016-01-14T14:51:24Z

Added higher-level test for localParams fix.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098459#comment-15098459
 ] 

Joel Bernstein edited comment on SOLR-8550 at 1/14/16 5:41 PM:
---

The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes its underlying stream. Syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes, where 
it would live.
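
A very rough sketch of that shape, assuming a minimal open/read/close stream 
contract (the tiny {{SimpleStream}} interface below is a stand-in for the real 
Streaming API types, and none of the names come from an actual patch):

{code:java}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the Streaming API's open/read/close contract.
interface SimpleStream {
  void open() throws IOException;
  Object read() throws IOException;   // returns null at end of stream
  void close() throws IOException;
}

// Sketch of an async wrapper: the /stream handler would open() it once and keep
// it around; a scheduled thread then drains the inner stream on every tick.
class AsyncStreamSketch implements SimpleStream {
  private final SimpleStream inner;
  private final long periodSeconds;
  private ScheduledExecutorService scheduler;

  AsyncStreamSketch(SimpleStream inner, long periodSeconds) {
    this.inner = inner;
    this.periodSeconds = periodSeconds;
  }

  @Override
  public void open() {
    scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(this::runOnce, 0, periodSeconds, TimeUnit.SECONDS);
  }

  private void runOnce() {
    try {
      inner.open();
      while (inner.read() != null) {
        // Each tuple would be pushed to an alert sink here.
      }
    } catch (IOException e) {
      // Log and retry on the next tick.
    } finally {
      try { inner.close(); } catch (IOException ignored) {}
    }
  }

  @Override
  public Object read() { return null; }  // nothing is pulled by clients

  @Override
  public void close() {
    if (scheduler != null) scheduler.shutdownNow();
  }
}
{code}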




was (Author: joel.bernstein):
The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in a memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes it's underlying stream. Syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facelitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.



> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams are synchronously *pulled* from a client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very 
> large-scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098459#comment-15098459
 ] 

Joel Bernstein edited comment on SOLR-8550 at 1/14/16 5:29 PM:
---

The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes its underlying stream. Syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large-scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes, where 
it would live.




was (Author: joel.bernstein):
The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream it will open it and just keep it around in a memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes it's underlying stream. Syntax would look like this:

{code}
async{alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facelitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.



> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams are synchronously *pulled* from a client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very 
> large-scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8496) Facet search count numbers are falsified by older document versions

2016-01-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098463#comment-15098463
 ] 

Hoss Man commented on SOLR-8496:


can we get some more details about your configs/schema? ... i'm trying to 
figure out enough details to be able to reproduce this.

Using a trivial test with the techproducts example, i can't seem to reproduce...

{noformat}
hossman@tray:~/lucene/5x_dev/solr$ bin/solr -e techproducts
...
hossman@tray:~/lucene/5x_dev/solr$ curl 
'http://localhost:8983/solr/techproducts/query?facet=true&facet.field=inStock&q=solr&omitHeader=true&rows=0'
{
  "response":{"numFound":1,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "inStock":[
"true",1,
"false",0]},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}}
...
hossman@tray:~/lucene/5x_dev/solr$ bin/post -c techproducts 
example/exampledocs/solr.xml 
...
hossman@tray:~/lucene/5x_dev/solr$ curl 
'http://localhost:8983/solr/techproducts/query?facet=true&facet.field=inStock&q=solr&omitHeader=true&rows=0'
{
  "response":{"numFound":1,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "inStock":[
"true",1,
"false",0]},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}}
hossman@tray:~/lucene/5x_dev/solr$ curl -sS 
'http://localhost:8983/solr/techproducts/admin/luke?wt=json&indent=true' | 
egrep "maxDoc|numDoc"
"numDocs":32,
"maxDoc":33,
{noformat}

> Facet search count numbers are falsified by older document versions
> ---
>
> Key: SOLR-8496
> URL: https://issues.apache.org/jira/browse/SOLR-8496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Linux 3.16.0-4-amd64 x86_64 Debian 8.2
> openjdk-7-jre-headless:amd64   version 7u91-2.6.3-1~deb8u1
> solr-5.4.0, extracted from official tar
> Default solr settings from install script:SOLR_HEAP="512m"
> GC_LOG_OPTS="-verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails \
> -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
> -XX:+PrintGCApplicationStoppedTime"
> GC_TUNE="-XX:NewRatio=3 \
> -XX:SurvivorRatio=4 \
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+UseParNewGC \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:+CMSScavengeBeforeRemark \
> -XX:PretenureSizeThreshold=64m \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=50 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled \
> -XX:+ParallelRefProcEnabled"
> SOLR_OPTS="$SOLR_OPTS -Xss256k"
>Reporter: Andreas Müller
>
> Our setup is based on multiple cores. In one core we have a multi-valued field with 
> integer values, and some other unimportant fields. We're using multi-select faceting 
> for this field.
> We're querying a test scenario with:
> {code}
> http://localhost:8983/solr/core-name/select?q=dummyask: (true) AND 
> manufacturer: false AND id: (15039 16882 10850 
> 20781)&fq={!tag=professions}professions: 
> (59)&fl=id&wt=json&indent=true&facet=true&facet.field={!ex=professions}professions
> {code}
> - Query: (numDocs:48545, maxDoc:48545)
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 4
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> - Then we update one document and change some fields (numDocs:48545, 
> maxDoc:48546). *The number of maxDocs has increased.*
> {code:xml}
> 
> 
> 0
> 1
> 
> 
> 
> 10850
> 
> 
> 16882
> 
> 
> 15039
> 
> 
> 20781
> 
> 
> 
> 
> 
> 
> 5
> 
> 
> 
> 
> 
> 
> 
> 
> {code}
> *The Problem:*
> In the first query, we're getting a facet count of 4, which is correct. After 
> updating one document, we're getting 5 as a result, which is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098459#comment-15098459
 ] 

Joel Bernstein commented on SOLR-8550:
--

The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an AsyncStream, 
it will open it and just keep it around in memory. The AsyncStream will have 
a thread that wakes up periodically and opens, reads, and closes its 
underlying stream. The syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.



> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams are synchronously *pulled* by a client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very large 
> scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098459#comment-15098459
 ] 

Joel Bernstein edited comment on SOLR-8550 at 1/14/16 5:28 PM:
---

The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees an 
AsyncStream, it will open it and just keep it around in memory. The 
AsyncStream will have a thread that wakes up periodically and opens, reads, and 
closes its underlying stream. The syntax would look like this:

{code}
async(alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facilitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.




was (Author: joel.bernstein):
The general design is to add a new AsyncStream which will be handled 
differently by the /stream handler. When the /stream handler sees a AsyncStream 
it will open it and just keep it around in a memory. The AsyncStream will have 
a thread that wakes up periodically and opens, reads, and closes it's 
underlying stream. Syntax would look like this:

{code}
async{alert())
{code}

The AlertStream would be a new stream created in a different ticket. 

Parallel async streams should work fine as well, facelitating very large scale 
alerting systems:
{code}
parallel(async(alert()))
{code}

In the parallel example the AsyncStream would be pushed to worker nodes where 
they would live.



> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams are synchronously *pulled* by a client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very large 
> scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8550) Add asynchronous streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8550:
-
Summary: Add asynchronous streams to the Streaming API to facilitate 
alerting  (was: Add asynchronous Streams to the Streaming API to facilitate 
alerting)

> Add asynchronous streams to the Streaming API to facilitate alerting
> 
>
> Key: SOLR-8550
> URL: https://issues.apache.org/jira/browse/SOLR-8550
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently all streams are synchronously *pulled* by a client.
> It would be great to add the capability to have asynchronous streams that live 
> within Solr and can *push* content as well. This would facilitate very large 
> scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8550) Add asynchronous Streams to the Streaming API to facilitate alerting

2016-01-14 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8550:


 Summary: Add asynchronous Streams to the Streaming API to 
facilitate alerting
 Key: SOLR-8550
 URL: https://issues.apache.org/jira/browse/SOLR-8550
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein


Currently all streams are synchronously *pulled* by a client.

It would be great to add the capability to have asynchronous streams that live 
within Solr and can *push* content as well. This would facilitate very large 
scale alerting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15552 - Failure!

2016-01-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15552/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:60783/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:60783/awholynewcollection_0: non ok status: 
500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([96654382E72046F4:1E317C5849DC2B0C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:511)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1774)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:644)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36

[jira] [Commented] (SOLR-2798) Local Param parsing does not support multivalued params

2016-01-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098413#comment-15098413
 ] 

Hoss Man commented on SOLR-2798:


bq. should I open a pull request, or is there anything more you would like me 
to do first?

Ideally, either open a pull request that refers to SOLR-2798 (so our git bot 
picks it up) or attach a comprehensive patch with all the changes.

I'll try to do a comprehensive review later today.

> Local Param parsing does not support multivalued params
> ---
>
> Key: SOLR-2798
> URL: https://issues.apache.org/jira/browse/SOLR-2798
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Anshum Gupta
>
> As noted by Demian on the solr-user mailing list, Local Param parsing seems 
> to use a "last one wins" approach when parsing multivalued params.
> In this example, the value of "111" is completely ignored:
> {code}
> http://localhost:8983/solr/select?debug=query&q={!dismax%20bq=111%20bq=222}foo
> {code}
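
As a hedged illustration of the expected behaviour (not the parser change itself), the multivalued case can be stated against ModifiableSolrParams, which is used here only as a stand-in for the parsed local params:

{code}
import org.apache.solr.common.params.ModifiableSolrParams;

// Sketch of the expectation: after parsing {!dismax bq=111 bq=222}, both bq
// values should be retrievable, just as they are when added twice by hand.
public class MultiValuedLocalParamsExpectation {
  public static void main(String[] args) {
    ModifiableSolrParams local = new ModifiableSolrParams();
    local.add("bq", "111");
    local.add("bq", "222");
    // Today's local-param parsing keeps only the last value; the desired
    // behaviour is that getParams("bq") returns both.
    System.out.println(String.join(",", local.getParams("bq"))); // 111,222
  }
}
{code}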



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6973) Improve TeeSinkTokenFilter

2016-01-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098394#comment-15098394
 ] 

Uwe Schindler commented on LUCENE-6973:
---

OK, looks fine. I did not run the tests; I wonder why the tests of CustomAnalyzer or 
similar did not catch the non-existent classes.

> Improve TeeSinkTokenFilter
> --
>
> Key: LUCENE-6973
> URL: https://issues.apache.org/jira/browse/LUCENE-6973
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6973.patch, LUCENE-6973.patch, LUCENE-6973.patch, 
> LUCENE-6973.patch, LUCENE-6973.patch
>
>
> {{TeeSinkTokenFilter}} can be improved in several ways, as it's written today:
> The most major one is removing {{SinkFilter}} which just doesn't work and is 
> confusing. E.g., if you set a {{SinkFilter}} which filters tokens, the 
> attributes on the stream such as {{PositionIncrementAttribute}} are not 
> updated. Also, if you update any attribute on the stream, you affect other 
> {{SinkStreams}} ... It's best if we remove this confusing class, and let 
> consumers reuse existing {{TokenFilters}} by chaining them to the sink stream.
> After we do that, we can make all the cached states a single (immutable) 
> list, which is shared between all the sink streams, so we don't need to keep 
> many references around, and also deal with {{WeakReference}}.
> Besides that there are some other minor improvements to the code that will 
> come after we clean up this class.
> From a backwards-compatibility standpoint, I don't think that {{SinkFilter}} 
> is actually used anywhere (since it just ... confusing and doesn't work as 
> expected), and therefore I believe it won't affect anyone. If however someone 
> did implement a {{SinkFilter}}, it should be trivial to convert it to a 
> {{TokenFilter}} and chain it to the {{SinkStream}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6279) cores?action=UNLOAD can unregister unclosed core

2016-01-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098385#comment-15098385
 ] 

ASF subversion and git services commented on SOLR-6279:
---

Commit 1724654 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1724654 ]

SOLR-6279: cores?action=UNLOAD now waits for the core to close before 
unregistering it from ZK.
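
For readers following along, a minimal sketch of the shape of that change (the names here are hypothetical; the actual fix lives in CoreContainer.unload and the core close hooks), where a latch stands in for the close notification:

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: block until the core reports closed, and only then
// unregister it from ZooKeeper (so its files are safe to move afterwards).
class UnloadWaitSketch {
  private final CountDownLatch closed = new CountDownLatch(1);

  // Would be registered as a close hook on the core being unloaded.
  void onCoreClosed() {
    closed.countDown();
  }

  // Called by the UNLOAD handler after asking the core to close.
  void unregisterAfterClose(Runnable unregisterFromZk) throws InterruptedException {
    if (!closed.await(30, TimeUnit.SECONDS)) {
      throw new IllegalStateException("core did not close in time");
    }
    unregisterFromZk.run();  // only now is it safe to relocate the core's files
  }
}
{code}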

> cores?action=UNLOAD can unregister unclosed core
> 
>
> Key: SOLR-6279
> URL: https://issues.apache.org/jira/browse/SOLR-6279
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>
> baseline:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   /somewhere/instanceA/collection1_shard2/core.properties
>   /somewhere/instanceA/collection1_shard2/data
>   /somewhere/instanceB
> {code}
> actions:
> {code}
>   curl 
> "http://host:port/solr/admin/cores?action=UNLOAD&core=collection1_shard2";
>   # since UNLOAD completed we should now be free to move the unloaded core's 
> files as we wish
>   mv /somewhere/instanceA/collection1_shard2 
> /somewhere/instanceB/collection1_shard2
> {code}
> expected result:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   # collection1_shard2 files have been fully relocated
>   /somewhere/instanceB/collection1_shard2/core.properties.unloaded
>   /somewhere/instanceB/collection1_shard2/data
> {code}
> actual result:
> {code}
>   /somewhere/instanceA/collection1_shard1/core.properties
>   /somewhere/instanceA/collection1_shard1/data
>   /somewhere/instanceA/collection1_shard2/data
>   # collection1_shard2 files have not been fully relocated and/or some files 
> were left behind in instanceA because the UNLOAD action had returned prior to 
> the core being closed
>   /somewhere/instanceB/collection1_shard2/core.properties.unloaded
>   /somewhere/instanceB/collection1_shard2/data
> {code}
> +proposed fix:+ Changing CoreContainer.unload to wait for core to close 
> before unregistering it from ZK. Adding testMidUseUnload method to 
> TestLazyCores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8444) Merge facet telemetry information from shards

2016-01-14 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8444:
--
Description: 
This is to merge facet telemetry information from shards together. Here is the 
way to merge different fields in facet telemetry.

1. elapse: sum of elapse fields in shard telemetry
2. domainSize: sum 
3. numBuckets: sum
4. other fields: skip in merging.

In addition, the merged result contains a list of facet telemetry in each shard.


  was:
This is to merge facet telemetry information from shards together. Here is the 
way to merge different fields in facet telemetry.

1. elapse: sum of elapse fields in shard telemetry
2. domainSize: sum 
3. numBuckets: sum
4. other fields: skip in merging.



> Merge facet telemetry information from shards
> -
>
> Key: SOLR-8444
> URL: https://issues.apache.org/jira/browse/SOLR-8444
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
>
> This is to merge facet telemetry information from shards together. Here is 
> the way to merge different fields in facet telemetry.
> 1. elapse: sum of elapse fields in shard telemetry
> 2. domainSize: sum 
> 3. numBuckets: sum
> 4. other fields: skip in merging.
> In addition, the merged result contains a list of facet telemetry in each 
> shard.
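
A short sketch of those merge rules (the map-based representation and field handling here are assumptions for illustration, not the actual patch):

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration of the rules above: sum elapse/domainSize/numBuckets, skip the
// other fields, and keep the raw per-shard telemetry alongside the totals.
class FacetTelemetryMergeSketch {
  static Map<String, Object> merge(List<Map<String, Object>> perShard) {
    long elapse = 0, domainSize = 0, numBuckets = 0;
    for (Map<String, Object> shard : perShard) {
      elapse     += ((Number) shard.getOrDefault("elapse", 0L)).longValue();
      domainSize += ((Number) shard.getOrDefault("domainSize", 0L)).longValue();
      numBuckets += ((Number) shard.getOrDefault("numBuckets", 0L)).longValue();
      // other fields: intentionally skipped when merging
    }
    Map<String, Object> merged = new HashMap<>();
    merged.put("elapse", elapse);
    merged.put("domainSize", domainSize);
    merged.put("numBuckets", numBuckets);
    merged.put("shards", new ArrayList<>(perShard));  // per-shard telemetry preserved
    return merged;
  }
}
{code}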



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8444) Merge facet telemetry information from shards

2016-01-14 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15095286#comment-15095286
 ] 

Michael Sun edited comment on SOLR-8444 at 1/14/16 4:51 PM:


bq. We need to also consider how to merge distributed debug info (and add more 
info about the distributed phase as well). Given this, (2) may be simpler 
(adding directly to facet response) as we already have a framework for merging. 
(From SOLR-8228)

Probably better to use a separate merger to merge facet telemetry information. 
The main reason is that facet telemetry merging may be different from facet 
result merging. For example, some fields in facet telemetry are not merged. In 
addition, the merged telemetry needs to contain both the merged numbers and the 
telemetry from each shard.

cc [~yo...@apache.org]



was (Author: michael.sun):
bq. We need to also consider how to merge distributed debug info (and add more 
info about the distributed phase as well). Given this, (2) may be simpler 
(adding directly to facet response) as we already have a framework for merging. 
(From SOLR-8228)

Probably better to use a separate merger to merge facet telemetry information. 
The reason is some fields are not merged. Therefore the merged telemetry need 
to contain both merged numbers and telemetry from each shards.

cc [~yo...@apache.org]


> Merge facet telemetry information from shards
> -
>
> Key: SOLR-8444
> URL: https://issues.apache.org/jira/browse/SOLR-8444
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
>
> This is to merge facet telemetry information from shards together. Here is 
> the way to merge different fields in facet telemetry.
> 1. elapse: sum of elapse fields in shard telemetry
> 2. domainSize: sum 
> 3. numBuckets: sum
> 4. other fields: skip in merging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8549) Start script could check for cores that have failed to load

2016-01-14 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8549:

Attachment: SOLR-8549.patch

Simple patch

> Start script could check for cores that have failed to load 
> 
>
> Key: SOLR-8549
> URL: https://issues.apache.org/jira/browse/SOLR-8549
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Trivial
> Attachments: SOLR-8549.patch
>
>
> I ran into this situation where I had started the techproducts example. Then I 
> made a mistake in my schema, and when I restarted the example the core 
> failed to load.
> On restart the start script didn't know that the core was already there and 
> tried to create it again. This failed too, but I was left with no 
> example/techproducts/conf folder anymore.
> Steps to reproduce :
> 1. ./bin/solr start -e techproducts;./bin/solr stop
> 2. make any mistake in any conf file under example/techproducts/conf/
> 3. ./bin/solr start -e techproducts
> At this point you'll see lots of errors, and also no 
> example/techproducts/conf/ in which to fix the mistake.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8549) Start script could check for cores that have failed to load

2016-01-14 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-8549:
---

 Summary: Start script could check for cores that have failed to 
load 
 Key: SOLR-8549
 URL: https://issues.apache.org/jira/browse/SOLR-8549
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Trivial


I ran into this situation where I had started the techproducts example. Then I 
made a mistake in my schema, and when I restarted the example the core failed 
to load.

On restart the start script didn't know that the core was already there and 
tried to create it again. This failed too, but I was left with no 
example/techproducts/conf folder anymore.

Steps to reproduce :
1. ./bin/solr start -e techproducts;./bin/solr stop
2. make any mistake in any conf file under example/techproducts/conf/
3. ./bin/solr start -e techproducts

At this point you'll see lots of errors, and also no example/techproducts/conf/ 
in which to fix the mistake.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.4-Linux (64bit/jdk1.7.0_80) - Build # 375 - Failure!

2016-01-14 Thread Mark Miller
Hmm, does this use a hardcoded port?

On Thu, Jan 14, 2016 at 10:21 AM Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/375/
> Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC
>
> 1 tests failed.
> FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>
> Error Message:
> Address already in use
>
> Stack Trace:
> java.net.BindException: Address already in use
> at
> __randomizedtesting.SeedInfo.seed([4F7D1EB3A3005383:72A5B09F9BEE0DF3]:0)
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:463)
> at sun.nio.ch.Net.bind(Net.java:455)
> at sun.nio.ch
> .ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch
> .ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
> at
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
> at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
> at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
> at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
> at
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> Build Log:
> [...truncated 10302 lines...]
>[junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithKerberosAlt
>[junit4]   2> 452769 WARN
> (TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[4F7D1EB3A3005383]) [
> ] o.a.d.s.c.DefaultDirectoryService You didn't change the admin password of
> directory service instance 'DefaultKrbServer'.  Please update the admin
> password as soon as possible to prevent a possible security breach.
>[junit4]   2> NOTE: reproduce with: ant test
> -Dtestcase=TestSolrCloudWithKerberosAlt -Dtests.method=testBasics
> -Dtests.seed=4F7D1EB3A3005383 -Dtests.multiplier=3 -Dtests.slow=true
> -Dtests.locale=lv_LV -Dtests.timezone=Asia/Amman -Dtests.asserts=true
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   12.9s J1 | TestSolrCloudWithKerberosAlt.testBasics <<<
>[junit4]> Throwable #1: java.net.BindException: Address already in
> use
>[junit4]>at
> __randomizedtesting.SeedInfo.seed([4F7D1EB3A3005383:72A5B09F9BEE0DF3]:0)
>[junit4]>at sun.nio.ch.Net.bind0(Native Method)
>[junit4]>at sun.nio.ch.Net.bind(Net.java:463)
>[junit4]>at sun.nio.ch.Net.bind(Net.java:455)
>[junit4]>at sun.nio.ch
> .ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>[junit4]>at sun.nio.ch
> .ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>[junit4]>at
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
>[junit4]>at
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
>[junit4]>at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
>[junit4]>at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
>[junit4]>at
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
>[junit4]>at
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>[junit4]>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>[junit4]>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: leaving temporary files on disk at:
> /home/jenkins/workspace/Lucene-Solr-5.4-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.TestSolrCloudWithKerberosAlt_4F7D1EB3A3005383-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene54): {},
> docValues:{}, sim=DefaultSimilarity, locale=lv_LV, timezone=Asia/Amman
>[junit4]   2> NOTE: Linux 3.19.0-42-generic amd64/Oracle Corporation
> 1.7.0_80 (64-bit)/cpus=12,threads=1,free=161310352,total=536870912
>[junit4]   2> NOTE: All tests run in this JVM: [SmileWriterTest,
> CursorMarkTest, TestSolrCLIRunExample, TestSolrDynamicMBean,
> SolrIndexSplitterTest, TestCopyFieldCollectionResource,
> TestJettySolrRunner, TestExactStatsCache, HighlighterMaxOffsetTest,
> SolrInfoMBeanTest, FileUtilsTest, Fa

Re: [VOTE] Release Lucene/Solr 5.3.2-RC1

2016-01-14 Thread Adrien Grand
+1 SUCCESS! [1:11:11.509082]

I also tried to run TestBackwardsCompatibility from the 5.4 branch on an
index generated by this release candidate; this did not catch any problems.

On Wed, Jan 13, 2016 at 09:09, Adrien Grand wrote:

> On Wed, Jan 13, 2016 at 07:19, Ryan Ernst wrote:
>
>> While this isn't something we have tests for in
>> TestBackwardsCompatibility (that only tests every previous version against
>> the current version), we do have tests in TestVersion for parsing versions
>> that do not have constants (see testForwardsCompatibility). Version
>> constants are only shortcuts to Version objects with known values, not what
>> are passed around.
>>
>
> Thanks Ryan. So the version part should be fine at least.
>
> On Wed, Jan 13, 2016 at 2:31 AM, Yonik Seeley  wrote:
>>>
>> Seems like this versioning limitation should be fixed - we should
 always be free to create bugfix releases for past releases.

>>>
> While I agree it should not prevent us from releasing, as it is something
> that we would need to do anyway (for instance if we discover a serious
> corruption bug), it still puts us in less well-known territory, which means
> that we need to be more careful when testing the release.
>
> Most of the work has already been done, so I don't think we should cancel
> this release; we just need to test more carefully. But this is something
> that would keep me from proposing bugfix releases of previous
> minor releases again in the future, unless there is a major bug that needs
> to be addressed.
>


  1   2   >