[jira] [Commented] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2018-02-20 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371062#comment-16371062
 ] 

Cao Manh Dat commented on SOLR-7034:


The latest patch removes the LIR call in SolrCmdDistributor. 
[~shalinmangar], I think the patch is ready; can you review it?

> Consider allowing any node to become leader, regardless of their last 
> published state.
> --
>
> Key: SOLR-7034
> URL: https://issues.apache.org/jira/browse/SOLR-7034
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7034.patch, SOLR-7034.patch
>
>
> Now that we allow a min replication param for updates, I think it's time to 
> loosen this up. Currently, you can end up in a state where no one in a shard 
> thinks they can be leader, and so you get this fast, ugly infinite loop 
> trying to pick the leader.
> We should let any node that is able to properly sync with the available 
> replicas become leader if that process succeeds.
> The previous strategy was to account for the case of not having enough 
> replicas after a machine loss, to ensure you don't lose data. The idea was 
> that you should stop the cluster to avoid losing data, repair, and get all 
> your replicas involved in a leadership election. Instead, we should favor 
> carrying on, and those that want to ensure they don't lose data due to major 
> replica loss should use the min replication update param.
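For context, the "min replication update param" referred to above shipped as min_rf (SOLR-5468). A minimal sketch of how a client would use it; the collection name and localhost URL below are placeholder assumptions, not part of this patch:

```java
// Sketch only: illustrates the min_rf update parameter (SOLR-5468).
// The collection name "test" and localhost URL are placeholder assumptions.
public class MinRfSketch {
    // Build an update URL asking Solr to report whether at least minRf
    // replicas acknowledged the update.
    static String updateUrl(String collection, int minRf) {
        return "http://localhost:8983/solr/" + collection
                + "/update?min_rf=" + minRf + "&commit=true";
    }

    public static void main(String[] args) {
        System.out.println(updateUrl("test", 2));
        // Solr reports the achieved replication factor ("rf") in the response;
        // the client must compare rf to min_rf itself, since the update is not
        // rejected when fewer replicas acknowledge it.
    }
}
```

The point for this issue: with min_rf the burden of detecting under-replication moves to the client, which is what allows leader election to "carry on" safely.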



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1399 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1399/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SystemLogListenerTest.test

Error Message:
Trigger was not fired 

Stack Trace:
java.lang.AssertionError: Trigger was not fired 
at __randomizedtesting.SeedInfo.seed([80CBEE26A1715FDA:89FD1FC0F8D3222]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.autoscaling.SystemLogListenerTest.test(SystemLogListenerTest.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at __randomizedtesting.SeedInfo.seed([80CBEE26A1715FDA:ED374ADB1B39A0DD]:0)
at org.junit.Assert.fail(Assert.java:93)

[jira] [Updated] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2018-02-20 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-7034:
---
Attachment: SOLR-7034.patch




[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371042#comment-16371042
 ] 

Shawn Heisey commented on SOLR-11934:
-

Migrating from slf4j to log4j2, if that idea has support, should only happen in 
master.  I think it's too drastic a change for a minor release.


> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want to log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent of the logging implementation used? The SLF4J and log4j 
> documentation seem a bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
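Point 2> above can be answered with a small stand-in experiment (no slf4j dependency; the info method below is a hypothetical stand-in for a real logger): with parameterized logging the message string is not built when the level is disabled, but a method call passed as an argument is still evaluated.

```java
// Sketch: does log.info("whatever {}", someObjectOrMethodCall) do work when
// INFO is disabled? The formatting is skipped, but the argument expression
// is evaluated either way. "info" here is a stand-in, not slf4j itself.
public class LoggingCostSketch {
    static int calls = 0;

    // Stand-in for someObjectOrMethodCall in the issue description.
    static String expensive() { calls++; return "state"; }

    // Hypothetical stand-in for slf4j's log.info(String, Object): with the
    // level disabled, the message is never formatted...
    static void info(String fmt, Object arg) {
        boolean infoEnabled = false;  // pretend INFO is turned off
        if (infoEnabled) {
            System.out.println(fmt.replace("{}", String.valueOf(arg)));
        }
    }

    public static void main(String[] args) {
        info("whatever {}", expensive());  // ...but the argument is still evaluated
        System.out.println("expensive() calls: " + calls);  // prints 1, not 0
    }
}
```

So the parameterized form only avoids formatting cost; avoiding the argument's method call itself needs a log.isInfoEnabled() guard (or, on the log4j2 API, a lambda supplier).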






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371040#comment-16371040
 ] 

Shawn Heisey commented on SOLR-11934:
-

Thought stream:

I think the default logfile rollover size of 4MB is WAY too small.  On a busy 
server, it can cycle through all ten logfiles in a hurry and what you're 
looking for may not even be available.  I increase this to 4GB on my installs, 
but this is perhaps far too large for a default.  I think it does need to be 
increased.  I wonder if 100MB is too large.  64MB?  Should we switch to 
time-based rollover instead?  Because the time-based logs have variable 
filenames, scripting becomes more difficult.  And on a busy server, one day of 
logging could be REALLY huge.
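For reference, a larger size-based rollover like the one discussed would look roughly like this in log4j2 syntax. This is only a sketch, not Solr's actual config (Solr still shipped log4j 1.x at this point), and the file names and layout pattern are assumptions:

```xml
<!-- Sketch: RollingFile appender with the 64 MB trigger floated above. -->
<RollingFile name="file"
             fileName="${sys:solr.log.dir}/solr.log"
             filePattern="${sys:solr.log.dir}/solr.log.%i">
  <PatternLayout pattern="%d{ISO8601} %-5p (%t) %c{1.} %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="64 MB"/>
    <!-- the time-based alternative discussed above would be:
         <TimeBasedTriggeringPolicy interval="1"/> -->
  </Policies>
  <DefaultRolloverStrategy max="10"/> <!-- keep ten files, as today -->
</RollingFile>
```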

I'm wary of splitting the logging into separate files.  It sounds like a good 
idea to control verbosity, until you discover that you need to see the order of 
seven different log entries that all happened in the same millisecond, and 
because they're in multiple files, you have no idea what order they occurred.  
The solution to that would probably lead to duplication of logging entries in 
multiple files, so we're back to a situation where too much data is being 
logged.

Logging into a core/collection is an interesting idea, especially if there is a 
UI for querying it, and the problem of recursive logging storms can be 
prevented.  (probably by logging everything for that core/collection ONLY to a 
logfile)

Without doing some research, I'm neutral on the subject of intricate log4j2 
config details and how to use MDC more effectively.

Minor point: I disagree with the assertion [~gus_heck] made that ERROR can be 
looked at in the morning.  I would use that phrase to describe WARN.

I don't see any ability in slf4j to actually log at FATAL, and I've never seen 
any logs from Solr at that level.  The highest severity it seems to give the 
developer is ERROR.  I think there are probably some problems that should be 
logged at this level.  The log4j API has FATAL, but we're not using log4j in 
the code, only slf4j.

Perhaps as an Apache project, we should be eating our own dogfood, not using 
slf4j.  What do people think about the idea of a new issue that migrates slf4j 
to log4j2 and configures it programmatically from solr.xml?  The config could 
have a few very simple options (some for size-based rotation like we have now, 
some for time-based rotation, etc).  I wonder if there's possibly a way that we 
could tell it to configure from a log4j.xml file instead of those easy options, 
so expert users can still have complete control over the logging with all the 
capability that log4j normally has.



[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371026#comment-16371026
 ] 

Mikhail Khludnev edited comment on SOLR-9510 at 2/21/18 7:07 AM:
-

new patch just has a sample: 
{code}
"q", "{!parent tag=top filters=$child.fq which=type_s:book v=$childquery}" // tagging for exclusion
, "childquery", "comment_t:*"
, "child.fq", "{!tag=author}author_s:dan"
, "child.fq", "{!tag=stars}stars_i:4"
, "fq", "{!tag=top}title_t:Snow\\ Crash" // tagging for exclusion
.. , "json.facet", "{" +
"  ,comments_for_stars_parent_filter: {" +
"domain: { excludeTags:top, " + // remove all parent scope filters and query
"  filter:[\"{!filters param=$child.fq  excludeTags=stars v=$childquery}\"," // apply child filters with excluding one
+ "\"{!child of=type_s:book}{!filters param=$fq}\"] }," + // apply parent scope filter joined to children
"type:terms," +
"field:stars_i," +
"facet: {" +
"   in_books: \"unique(_root_)\" }}" + // aggregate counts
{code}

* here we can avoid the potentially expensive {{blockChildren}}
* it suggests it makes sense (*TODO*) to support {{filters}} and 
{{excludeTags}} in {{\{!child}} as well 



> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired 

[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370630#comment-16370630
 ] 

Mikhail Khludnev edited comment on SOLR-9510 at 2/21/18 7:07 AM:
-

Here we go, current patch: 
* just adds {{filters}} into {{\{!parent}}, and 
* a brand new {{\{!filters param=$chq}} (mind the singular); btw, shouldn't 
this parameter be named {{ref}}?
* besides that, there are no changes in json.facet at all.
* the how-to is 
** tag the {{q=\{!parent tag=top}}
** have {{fq=type:parent}} 
** exclude it in {{domain:\{excludeTags:top}}}
** join expanded parents to children (might be a performance penalty)
** filter them again with filter exclusion {{filter:"\{!filters param=$chq 
excludeTags=color"}}

In addition to the earlier *TODO*: extract the {{excludeTags}} code and reuse 
it between bjq and filters; btw, can bjq be a descendant of that 
{{\{!filters}}?

[~werder] the difference between {{global}} and {{excludeTags=top}} is that 
the former selects {{*:*}} while the exclusion might end up with MatchNoDocs.



> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing; however, I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!

[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371026#comment-16371026
 ] 

Mikhail Khludnev edited comment on SOLR-9510 at 2/21/18 7:05 AM:
-

new patch just has a sample: 
{code}
"q", "{!parent tag=top filters=$child.fq which=type_s:book v=$childquery}" // tagging for exclusion
, "childquery", "comment_t:*"
, "child.fq", "{!tag=author}author_s:dan"
, "child.fq", "{!tag=stars}stars_i:4"
, "fq", "{!tag=top}title_t:Snow\\ Crash" // tagging for exclusion
.. , "json.facet", "{" +
"  ,comments_for_stars_parent_filter: {" +
"domain: { excludeTags:top, " + // remove all parent scope filters and query
"  filter:[\"{!filters param=$child.fq  excludeTags=stars v=$childquery}\"," // apply child filters with excluding one
+ "\"{!child of=type_s:book}{!filters param=$fq}\"] }," + // apply parent scope filter joined to children
"type:terms," +
"field:stars_i," +
"facet: {" +
"   in_books: \"unique(_root_)\" }}" + // aggregate counts
{code}

* here we can avoid the potentially expensive {{blockChildren}}
* it suggests it makes sense to support {{filters}} and {{excludeTags}} in 
{{\{!child}} as well 




[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371026#comment-16371026
 ] 

Mikhail Khludnev commented on SOLR-9510:


new patch just has a sample: 
{code}
"q", "{!parent tag=top filters=$child.fq which=type_s:book v=$childquery}"
, "childquery", "comment_t:*"
, "child.fq", "{!tag=author}author_s:dan"
, "child.fq", "{!tag=stars}stars_i:4"
, "fq", "{!tag=top}title_t:Snow\\ Crash"
.. , "json.facet", "{" +
"  ,comments_for_stars_parent_filter: {" +
"domain: { excludeTags:top, " + // remove all parent scope filters and query
"  filter:[\"{!filters param=$child.fq  excludeTags=stars v=$childquery}\"," // apply child filters with excluding one
+ "\"{!child of=type_s:book}{!filters param=$fq}\"] }," + // apply parent scope filter joined to children
"type:terms," +
"field:stars_i," +
"facet: {" +
"   in_books: \"unique(_root_)\" }}" + // aggregate counts
{code}

* here we can avoid the potentially expensive {{blockChildren}}
* it suggests it makes sense to support {{filters}} and {{excludeTags}} in 
{{\{!child}} as well 

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, an alternative approach might be to move this 
> into the {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371018#comment-16371018
 ] 

Varun Thacker edited comment on SOLR-7887 at 2/21/18 6:57 AM:
--

More updates to the patch.

There is a bug which I'm looking into next:

When calling Log4j2Watcher#setLevel where the category is 
"org.apache.solr.update.processor", the LoggerConfig returned is "root", so if 
we set it to DEBUG then the root logger changes and not just the individual 
class/package:
{noformat}
LoggerConfig loggerConfig = getLoggerConfig(ctx, category);{noformat}


was (Author: varunthacker):
More updates to the patch.


There is a bug right now which i'm looking into next :

When calling Log4j2Watcher#setLevelwhere category is 
"org.apache.solr.update.processor" the LoggerConfig  returned is "root" so if 
we set it to DEBUG then the root logger changes and not just the individual 
class/package

 
{noformat}
LoggerConfig loggerConfig = getLoggerConfig(ctx, category);{noformat}
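
For context, log4j2 resolves a {{LoggerConfig}} by walking up the dotted category name until it finds a configured entry, falling back to the root config, which is consistent with {{getLoggerConfig}} returning "root" here. A hypothetical model of that lookup (Python for brevity; the configuration names are invented):

```python
# Illustrative model of hierarchical logger-config resolution: walk up
# the dotted category name until a configured entry is found, falling
# back to the root config. Not the actual log4j2 implementation.

def resolve_logger_config(configured, category):
    name = category
    while name:
        if name in configured:
            return name
        # Drop the last dotted segment, e.g. "a.b.c" -> "a.b".
        name = name.rpartition(".")[0]
    return "root"

configured = {"org.apache.solr": "INFO"}

# An unconfigured category resolves to its nearest configured ancestor...
print(resolve_logger_config(configured, "org.apache.solr.update.processor"))
# ...and with no configured ancestor at all, to "root" -- which is why
# setting a level on the returned config can change the root logger.
print(resolve_logger_config({}, "org.apache.solr.update.processor"))
```

A watcher that wants per-category levels would therefore need to create a new config when the resolved name differs from the requested category, rather than mutating the one returned.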

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.






[jira] [Comment Edited] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371018#comment-16371018
 ] 

Varun Thacker edited comment on SOLR-7887 at 2/21/18 6:57 AM:
--

More updates to the patch.

There is a bug which I'm looking into next:

When calling Log4j2Watcher#setLevel where the category is 
"org.apache.solr.update.processor", the LoggerConfig returned is "root", so if 
we set it to DEBUG then the root logger changes and not just the individual 
class/package:
{noformat}
LoggerConfig loggerConfig = getLoggerConfig(ctx, category);{noformat}


was (Author: varunthacker):
More updates to the patch.

There is a bug which i'm looking into next :

When calling Log4j2Watcher#setLevelwhere category is 
"org.apache.solr.update.processor" the LoggerConfig  returned is "root" so if 
we set it to DEBUG then the root logger changes and not just the individual 
class/package
{noformat}
LoggerConfig loggerConfig = getLoggerConfig(ctx, category);{noformat}




[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371018#comment-16371018
 ] 

Varun Thacker commented on SOLR-7887:
-

More updates to the patch.


There is a bug right now which I'm looking into next:

When calling Log4j2Watcher#setLevel where the category is 
"org.apache.solr.update.processor", the LoggerConfig returned is "root", so if 
we set it to DEBUG then the root logger changes and not just the individual 
class/package:
{noformat}
LoggerConfig loggerConfig = getLoggerConfig(ctx, category);{noformat}




[jira] [Updated] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-02-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7887:

Attachment: SOLR-7887.patch




[jira] [Updated] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9510:
---
Attachment: SOLR_9510.patch




[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11959.patch

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, SolrCloud
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
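
One plausible direction for a fix (a hedged sketch, not necessarily what the attached patch does) is for the CDCR replicator's client to send credentials with each request to the protected target, e.g. an HTTP Basic {{Authorization}} header. The credentials below are placeholders:

```python
# Illustrative sketch: building the HTTP Basic Authorization header a
# client must send so a security.json-protected /update endpoint does
# not answer 401. User and password are hypothetical placeholders.
import base64

def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

print(basic_auth_header("cdcr-user", "secret"))
# -> Basic Y2Rjci11c2VyOnNlY3JldA==
```

Without such a header (or some equivalent authentication plumbing in the replicator's SolrClient), every forwarded update hits the 401 shown in the stack trace above.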






[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11836.patch




[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: (was: SOLR-11836.patch)




[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: (was: SOLR-11959.patch)




[JENKINS] Lucene-Solr-repro - Build # 75 - Still Unstable

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/75/

[...truncated 31 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2353/consoleText

[repro] Revision: a31a8dae2e8a40c5c6a7c7a07ab4a7c27e4f7ed6

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=A83ABF3BE9CAEDB3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-IN 
-Dtests.timezone=Asia/Thimbu -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=A83ABF3BE9CAEDB3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-SV 
-Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=PeerSyncReplicationTest 
-Dtests.method=test -Dtests.seed=A83ABF3BE9CAEDB3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=fr -Dtests.timezone=America/Moncton 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ReplaceNodeNoTargetTest 
-Dtests.method=test -Dtests.seed=A83ABF3BE9CAEDB3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ca -Dtests.timezone=Arctic/Longyearbyen 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestContentStreamDataSource 
-Dtests.method=testCommitWithin -Dtests.seed=7C0C821FF129BF1A 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=tr 
-Dtests.timezone=Asia/Ulaanbaatar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
a9f0272380438df88d29ed7c41572136f999f8db
[repro] git checkout a31a8dae2e8a40c5c6a7c7a07ab4a7c27e4f7ed6

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTriggerIntegration
[repro]   PeerSyncReplicationTest
[repro]   ReplaceNodeNoTargetTest
[repro]   TestReplicationHandler
[repro]solr/contrib/dataimporthandler
[repro]   TestContentStreamDataSource
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestTriggerIntegration|*.PeerSyncReplicationTest|*.ReplaceNodeNoTargetTest|*.TestReplicationHandler"
 -Dtests.showOutput=onerror -Dtests.seed=A83ABF3BE9CAEDB3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=en-IN -Dtests.timezone=Asia/Thimbu 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 6206 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 561 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestContentStreamDataSource" -Dtests.showOutput=onerror 
-Dtests.seed=7C0C821FF129BF1A -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=tr -Dtests.timezone=Asia/Ulaanbaatar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 73 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.PeerSyncReplicationTest
[repro]   0/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   0/5 failed: 
org.apache.solr.handler.dataimport.TestContentStreamDataSource
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   4/5 failed: org.apache.solr.cloud.ReplaceNodeNoTargetTest
[repro] git checkout a9f0272380438df88d29ed7c41572136f999f8db

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12010) Better error handling for Parallel SQL queries

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12010:

Attachment: SOLR-12010.patch

> Better error handling for Parallel SQL queries
> --
>
> Key: SOLR-12010
> URL: https://issues.apache.org/jira/browse/SOLR-12010
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: master (8.0)
>Reporter: Amrit Sarkar
>Priority: Minor
> Attachments: SOLR-12010.patch
>
>
> While building examples for Parallel SQL queries, we encountered strange error 
> messages that didn't make sense unless you looked deeply into the code; in my 
> case, I used a debugger on the source code to understand them better.
> e.g. 
> {code}
> curl --data-urlencode 'stmt=select emp_no_s,emp_no_s from salaries 
> group by emp_no_s
> limit 10' 
> http://localhost:8983/solr/employees/sql?aggregationMode=map_reduce
> {code}
> The aggregate field 'emp_no_s' is asked to group by twice, hence this is a 
> runtime SQL error, but the error message received in the Solr logs is:
> {code}
> Caused by: java.sql.SQLException: Error while executing SQL "select 
> emp_no_s,emp_no_s from salaries 
> group by emp_no_s
> limit 10": 1
> at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
> at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:269)
> ... 41 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.handler.sql.SolrTable.buildBuckets(SolrTable.java:559)
> at 
> org.apache.solr.handler.sql.SolrTable.handleGroupByMapReduce(SolrTable.java:445)
> at org.apache.solr.handler.sql.SolrTable.query(SolrTable.java:135)
> at org.apache.solr.handler.sql.SolrTable.access$100(SolrTable.java:64)
> at 
> org.apache.solr.handler.sql.SolrTable$SolrQueryable.query(SolrTable.java:859)
> at Baz.bind(Unknown Source)
> at 
> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:335)
> at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:294)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:559)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:550)
> at 
> org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:204)
> at 
> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:67)
> at 
> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:44)
> at 
> org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:630)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:607)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
> ... 43 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12010) Better error handling for Parallel SQL queries

2018-02-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371005#comment-16371005
 ] 

Amrit Sarkar commented on SOLR-12010:
-

Patch uploaded with no tests. WIP.

> Better error handling for Parallel SQL queries
> --
>
> Key: SOLR-12010
> URL: https://issues.apache.org/jira/browse/SOLR-12010
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: master (8.0)
>Reporter: Amrit Sarkar
>Priority: Minor
> Attachments: SOLR-12010.patch
>
>
> While building examples of Parallel SQL queries, we encountered strange error 
> messages that didn't make sense unless you looked deeply into the code; in my 
> case, I had to run the source under a debugger to understand them.
> e.g. 
> {code}
> curl --data-urlencode 'stmt=select emp_no_s,emp_no_s from salaries 
> group by emp_no_s
> limit 10' 
> http://localhost:8983/solr/employees/sql?aggregationMode=map_reduce
> {code}
> The aggregate field 'emp_no_s' is asked to group by twice, hence this is a 
> runtime SQL error, but the error message received in the Solr logs is:
> {code}
> Caused by: java.sql.SQLException: Error while executing SQL "select 
> emp_no_s,emp_no_s from salaries 
> group by emp_no_s
> limit 10": 1
> at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
> at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:269)
> ... 41 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.handler.sql.SolrTable.buildBuckets(SolrTable.java:559)
> at 
> org.apache.solr.handler.sql.SolrTable.handleGroupByMapReduce(SolrTable.java:445)
> at org.apache.solr.handler.sql.SolrTable.query(SolrTable.java:135)
> at org.apache.solr.handler.sql.SolrTable.access$100(SolrTable.java:64)
> at 
> org.apache.solr.handler.sql.SolrTable$SolrQueryable.query(SolrTable.java:859)
> at Baz.bind(Unknown Source)
> at 
> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:335)
> at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:294)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:559)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:550)
> at 
> org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:204)
> at 
> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:67)
> at 
> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:44)
> at 
> org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:630)
> at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:607)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
> at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
> ... 43 more
> {code}






[JENKINS] Lucene-Solr-Tests-master - Build # 2354 - Still unstable

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2354/

5 tests failed.
FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:34279_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:34279_solr
at 
__randomizedtesting.SeedInfo.seed([232BF95961C8BCC:8A66804F38E0E634]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by C

[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371002#comment-16371002
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Uploaded a patch with documentation changes reporting the limitation of CDCR 
with authentication plugins.

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, SolrCloud
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11959.patch

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, SolrCloud
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21502 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21502/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([8540452CE3162D82:D147AF64DEA407A]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([8540452CE3162D82:7C0DD683DF636008]:0)
at org.junit.Assert.fail(Assert.java:93)
at o

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370977#comment-16370977
 ] 

Mikhail Khludnev commented on SOLR-9510:


Oh, really? That is really cool. It means we can recompute the child docset 
just in domain.filter, avoiding the potentially expensive blockChildren. Let me 
check.

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as the engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntactic sugar to make it user-friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, an alternative approach might be to move this 
> into the {{domain:\{..}}} instruction of JSON facets. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose that might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






Re: Reg: Solr OOM

2018-02-20 Thread Shawn Heisey
When Varun suggested you contact the mailing list in SOLR-12009, he 
meant the solr-user mailing list.  This mailing list is for discussion 
about development of Lucene and Solr, not for support, bugs, etc.  Those 
kinds of discussions belong on the user list.


If this discussion requires more interaction, it should be moved to the 
user list, or possibly to the IRC channel.


On 2/20/2018 7:33 PM, BigData dev wrote:
We are seeing Solr OOM exceptions when we issue a query to a Solr 
collection.


The below is the stack trace we are seeing:


The stacktrace for OOME frequently has absolutely no relation to the 
part of the program that has filled the heap.  It is merely the part of 
the program that was executing at the moment that no more heap was 
available.  I couldn't tell if you were sharing the stacktrace from the 
logged exception or from the heap analysis.


If I'm reading the heap analysis correctly, it shows about 15GB of 
memory allocated by an array of IntersectTermsEnumFrame objects. 
Unfortunately, this class has no javadocs, and I am not familiar enough 
with the Lucene API to know what it's used for.  The class visibility 
appears to be package, not public, so it seems to be an internal 
implementation detail, which is probably why there are no javadocs.



And from the heap-dump analysis, we see the two major causes for OOM are:




For the first one, we are not sure of the sudden spike in memory, 
whereas for the second one, from the jira 
(https://issues.apache.org/jira/browse/SOLR-12009) we got the 
information that we need to enable docValues.


I'm guessing that one of two things is happening here, and it could be both:

1) Your index is so big that you're going to need a larger heap.
2) The types of queries you are sending to Solr are very memory-hungry.

For the second problem, if you can reduce the query complexity, you 
might not need as much heap.  Adding docValues as you were advised 
(which requires a complete reindex) might also help, depending on the 
nature of the queries.


For the first problem, more information is necessary. Here's a list of 
questions:


What is your max heap?  I'm guessing it is at least 18GB, based on the 
two highlighted entries from heap analysis.  It may be even larger, but 
I can't tell for sure.  18GB is a pretty big heap.
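As a reference point (this is an assumption about the deployment, not something stated in the thread): with the standard bin/solr startup scripts, an 18GB heap would typically be configured in solr.in.sh or on the command line like so:

```shell
# In solr.in.sh: SOLR_HEAP sets both -Xms and -Xmx for the Solr JVM.
SOLR_HEAP="18g"

# Equivalent at startup; the -m flag of bin/solr also sets min and max heap.
bin/solr start -m 18g
```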


How much total memory is in the server?  What OS is it running?

Is there software other than Solr running on the machine?

How many documents are in the core?

How big is the core on disk?

If there are multiple cores in the Solr instance, I will need the 
previous two pieces of information for all of them.


Do you know how many queries per second Solr is handling? If there is 
ongoing indexing, do you know how many documents per second are being 
added/updated, and do you know how often commits that open a new 
searcher are happening?  Sharing solrconfig.xml would be a good 
proactive step, and we may need to see Solr's logfile.  Use a paste 
website or a file-sharing service for this. Attachments rarely make it 
to the list.


Thanks,
Shawn





[jira] [Created] (SOLR-12010) Better error handling for Parallel SQL queries

2018-02-20 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-12010:
---

 Summary: Better error handling for Parallel SQL queries
 Key: SOLR-12010
 URL: https://issues.apache.org/jira/browse/SOLR-12010
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Parallel SQL
Affects Versions: master (8.0)
Reporter: Amrit Sarkar


While building examples of Parallel SQL queries, we encountered strange error 
messages that didn't make sense unless you looked deeply into the code; in my 
case, I had to run the source under a debugger to understand them.

e.g. 
{code}
curl --data-urlencode 'stmt=select emp_no_s,emp_no_s from salaries 
group by emp_no_s
limit 10' 
http://localhost:8983/solr/employees/sql?aggregationMode=map_reduce
{code}

The aggregate field 'emp_no_s' is asked to group by twice, hence this is a 
runtime SQL error, but the error message received in the Solr logs is:

{code}
Caused by: java.sql.SQLException: Error while executing SQL "select 
emp_no_s,emp_no_s from salaries 
group by emp_no_s
limit 10": 1
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:269)
... 41 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.solr.handler.sql.SolrTable.buildBuckets(SolrTable.java:559)
at 
org.apache.solr.handler.sql.SolrTable.handleGroupByMapReduce(SolrTable.java:445)
at org.apache.solr.handler.sql.SolrTable.query(SolrTable.java:135)
at org.apache.solr.handler.sql.SolrTable.access$100(SolrTable.java:64)
at 
org.apache.solr.handler.sql.SolrTable$SolrQueryable.query(SolrTable.java:859)
at Baz.bind(Unknown Source)
at 
org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:335)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:294)
at 
org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:559)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:550)
at 
org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:204)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:67)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:44)
at 
org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:630)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:607)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
... 43 more
{code}
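The {{ArrayIndexOutOfBoundsException: 1}} is the shape you get when an array is sized from the *distinct* group-by columns but filled by iterating the raw select list. A minimal, hypothetical Java sketch of that failure mode (the class and method here are illustrative, not Solr's actual {{SolrTable.buildBuckets}} code):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: the bucket array is sized from the distinct
// group-by columns, but filled by iterating the raw select list, so a
// duplicated column overruns the array.
public class DuplicateGroupByDemo {

    static String[] buildBuckets(String[] selectedFields) {
        // Deduplicate the group-by columns (size 1 for "emp_no_s, emp_no_s").
        Set<String> distinct = new LinkedHashSet<>();
        for (String f : selectedFields) {
            distinct.add(f);
        }
        String[] buckets = new String[distinct.size()];
        int i = 0;
        for (String f : selectedFields) {
            buckets[i++] = f; // i reaches 1 while buckets.length == 1 -> AIOOBE
        }
        return buckets;
    }
}
```

Selecting {{emp_no_s}} once succeeds; selecting it twice throws at index 1, matching the log above.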



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 467 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/467/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterDeleteByQuery

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexWriterDeleteByQuery_A180015643123DAD-001

at __randomizedtesting.SeedInfo.seed([A180015643123DAD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestSleepingLockWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001\tempDir-006:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001\tempDir-006

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001\tempDir-006:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001\tempDir-006
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSleepingLockWrapper_A180015643123DAD-001

at __randomizedtesting.SeedInfo.seed([A180015643123DAD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.

[jira] [Commented] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2018-02-20 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370927#comment-16370927
 ] 

Cao Manh Dat commented on SOLR-7034:


[~shalinmangar] Here is a patch for this issue using the SOLR-11702 term value; the term 
value is good enough to determine whether a replica is able to become a leader or not.

> Consider allowing any node to become leader, regardless of their last 
> published state.
> --
>
> Key: SOLR-7034
> URL: https://issues.apache.org/jira/browse/SOLR-7034
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7034.patch
>
>
> Now that we allow a min replication param for updates, I think it's time to 
> loosen this up. Currently, you can end up in a state where no one in a shard 
> thinks they can be leader and you so do this fast ugly infinite loop trying 
> to pick the leader.
> We should let anyone that is able to properly sync with the available 
> replicas to become leader if that process succeeds.
> The previous strategy was to account for the case of not having enough 
> replicas after a machine loss to ensure you don't lose the data. The idea was 
> that you should stop the cluster to avoid losing data and repair and get all 
> your replicas involved in a leadership election. Instead, we should favor 
> carrying on, and those that want to ensure they don't lose data due to major 
> replica loss should use the min replication update param.






[jira] [Updated] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2018-02-20 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-7034:
---
Attachment: SOLR-7034.patch







[jira] [Commented] (SOLR-11978) include SortableTextField in _default and sample_techproducts configsets

2018-02-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370920#comment-16370920
 ] 

Hoss Man commented on SOLR-11978:
-

[~varunthacker]: i have no strong feelings – feel free to change as you see fit.

> include SortableTextField in _default and sample_techproducts configsets
> 
>
> Key: SOLR-11978
> URL: https://issues.apache.org/jira/browse/SOLR-11978
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11978.patch
>
>
> since SortableTextField defaults to docValues="true" it has additional on-disk 
> overhead compared to TextField, which means I don't think we should 
> completely replace all suggested uses of TextField at this point – but it 
> would still be good to include it in our configsets, similar to the way we 
> include declarations for a variety of text analysis options.
> I also think several "explicit" fields in the techproducts schema would 
> benefit from using this.






[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-02-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch
>
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing, and then 
> start CDCR, bootstrapping only copies the index to the leader node of each shard of 
> the collection; followers never receive the documents / index until and unless at 
> least one document is inserted again on Source, which propagates to Target, and the 
> Target collection then triggers index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.






[jira] [Commented] (SOLR-11968) Multi-words query time synonyms

2018-02-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370916#comment-16370916
 ] 

Robert Muir commented on SOLR-11968:


I think the issue is still valid. It's a little more complex now because of 
positionLength (it means more buffering when you see posLength > 1, because you'll 
need to adjust if you remove something in its path), but the idea is the same: 
give the user a choice between "insert mode" and "replace mode". This new 
"insert mode" would actually work correctly, correcting posLengths before and 
posIncs after as appropriate, similar to how your editor might have to 
recompute some line breaks/word wrapping and so on.

If you have baseball (length=2), base (length=1), ball (length=1), and you delete 
"base", then in this case you need to change baseball's length to 1 before you emit 
it, because you deleted base. That's the "buffering before" that would be 
required for posLength. And you still need the same buffering described on the 
issue for posInc=0 that might occur after the fact, so you don't wrongly 
transfer synonyms to different words entirely.

It would be slower than the "replace mode" we have today, but only because of 
the buffering, and I think it's pretty contained, but I haven't fully thought it 
through or tried to write any code.
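As a toy model of the buffering described above (plain Java, no Lucene types; the "collapse the deleted position segment" rule is an assumption of this sketch, not an existing Lucene API): tokens are buffered with absolute positions, and deleting a token shrinks the positionLength of any buffered token whose span covered it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy model (no Lucene types) of the "insert mode" buffering: compute
// absolute positions, drop the deleted tokens, collapse the position
// segment each deleted token occupied, and recompute posInc/posLen.
public class PosLengthFixDemo {

    public static final class Token {
        public final String text;
        public final int posInc;  // position increment
        public final int posLen;  // position length
        public Token(String text, int posInc, int posLen) {
            this.text = text;
            this.posInc = posInc;
            this.posLen = posLen;
        }
    }

    /** Deletes the named tokens (assumed posLen == 1) and repairs survivors. */
    public static List<Token> deleteAndRepair(List<Token> in, Set<String> toDelete) {
        int n = in.size();
        int[] start = new int[n];
        List<Integer> deletedStarts = new ArrayList<>();
        int pos = -1;
        for (int i = 0; i < n; i++) {
            pos += in.get(i).posInc;
            start[i] = pos;
            if (toDelete.contains(in.get(i).text)) {
                deletedStarts.add(pos);
            }
        }
        List<Token> out = new ArrayList<>();
        int prev = -1;
        for (int i = 0; i < n; i++) {
            Token t = in.get(i);
            if (toDelete.contains(t.text)) {
                continue;
            }
            // Shift every position boundary past a collapsed segment left by one;
            // a token spanning a deleted segment thus loses one unit of posLen.
            int s = remap(start[i], deletedStarts);
            int e = remap(start[i] + t.posLen, deletedStarts);
            out.add(new Token(t.text, s - prev, Math.max(1, e - s)));
            prev = s;
        }
        return out;
    }

    private static int remap(int boundary, List<Integer> deletedStarts) {
        int shift = 0;
        for (int d : deletedStarts) {
            if (d < boundary) {
                shift++;
            }
        }
        return boundary - shift;
    }
}
```

Running it on baseball(posLen=2), base, ball with "base" deleted yields baseball with posLen=1, matching the adjustment described above.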

> Multi-words query time synonyms
> ---
>
> Key: SOLR-11968
> URL: https://issues.apache.org/jira/browse/SOLR-11968
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers, Schema and Analysis
>Affects Versions: master (8.0), 6.6.2
> Environment: Centos 7.x
>Reporter: Dominique Béjean
>Priority: Major
>
> I am trying multi-word query time synonyms with Solr 6.6.2 and the 
> SynonymGraphFilterFactory filter, as explained in this article
>  
> [https://lucidworks.com/2017/04/18/multi-word-synonyms-solr-adds-query-time-support/]
>   
>  My field type is :
> {code:java}
> 
>      
>        
>                      articles="lang/contractions_fr.txt"/>
>        
>        
>         ignoreCase="true"/>
>        
>      
>      
>        
>                      articles="lang/contractions_fr.txt"/>
>        
>                      ignoreCase="true" expand="true"/>
>        
>         ignoreCase="true"/>
>        
>      
>    {code}
>  
>  synonyms.txt contains the line :
> {code:java}
> om, olympique de marseille{code}
>  
>  stopwords.txt contains the word 
> {code:java}
> de{code}
>  
>  The order of words in my query has an impact on the generated query in 
> edismax
> {code:java}
> q={!edismax qf='name_text_gp' v=$qq}
>  &sow=false
>  &qq=...{code}
> with "qq=om maillot" or "qq=olympique de marseille maillot", I can see the 
> synonyms expansion. It is working as expected.
> {code:java}
> "parsedquery_toString":"+(((+name_text_gp:olympiqu +name_text_gp:marseil 
> +name_text_gp:maillot) name_text_gp:om))",
>  "parsedquery_toString":"+((name_text_gp:om (+name_text_gp:olympiqu 
> +name_text_gp:marseil +name_text_gp:maillot)))",{code}
> with "qq=maillot om" or "qq=maillot olympique de marseille", I can see the 
> same generated query 
> {code:java}
> "parsedquery_toString":"+((name_text_gp:maillot) (name_text_gp:om))",
>  "parsedquery_toString":"+((name_text_gp:maillot) (name_text_gp:om))",{code}
> I don't understand these generated queries. The first one looks like the 
> synonym expansion is ignored, but the second one shows it is not ignored and 
> only the synonym term is used.
>   
>  When I test the analysis for the field type, the synonyms are correctly 
> expanded for both expressions
> {code:java}
> om maillot  
>  maillot om
>  olympique de marseille maillot
>  maillot olympique de marseille{code}
> resulting outputs always include the following terms (obviously not always 
> in the same order)
> {code:java}
> olympiqu om marseil maillot {code}
>  
>  So, I suspect an issue with the edismax query parser.






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1398 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1398/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.search.join.BlockJoinFacetDistribTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.search.join.BlockJoinFacetDistribTest: 1) Thread[id=11161, 
name=qtp515070-11161, state=TIMED_WAITING, 
group=TGRP-BlockJoinFacetDistribTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.search.join.BlockJoinFacetDistribTest: 
   1) Thread[id=11161, name=qtp515070-11161, state=TIMED_WAITING, 
group=TGRP-BlockJoinFacetDistribTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([FE37B6B8804C4D10]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.search.join.BlockJoinFacetDistribTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=11161, name=qtp515070-11161, state=TIMED_WAITING, 
group=TGRP-BlockJoinFacetDistribTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=11161, name=qtp515070-11161, state=TIMED_WAITING, 
group=TGRP-BlockJoinFacetDistribTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([FE37B6B8804C4D10]:0)


FAILED:  org.apache.lucene.index.TestStressNRT.test

Error Message:
MockDirectoryWrapper: cannot close: there are still 22 open files: {_9f.cfs=1, 
_9p.fdt=1, _9p_Asserting_0.tim=1, _9o.fdt=1, _9n.fdt=1, _9t.fdt=1, 
_9p_Asserting_0.doc=1, _9k_Asserting_0.tim=1, _9n_Asserting_0.tim=1, _9u.cfs=1, 
_9q_Asserting_0.tim=1, _9t_Asserting_0.tim=1, _9k.fdt=1, _9o_Asserting_0.doc=1, 
_9k_Asserting_0.doc=1, _9n_Asserting_0.doc=1, _9s.cfs=1, _9q_Asserting_0.doc=1, 
_9t_Asserting_0.doc=1, _9q.fdt=1, _96.cfs=1, _9o_Asserting_0.tim=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
22 open files: {_9f.cfs=1, _9p.fdt=1, _9p_Asserting_0.tim=1, _9o.fdt=1, 
_9n.fdt=1, 

[jira] [Commented] (SOLR-11968) Multi-words query time synonyms

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370899#comment-16370899
 ] 

Steve Rowe commented on SOLR-11968:
---

bq. LUCENE-4065 should probably be closed as won't-fix (I'll comment there in a 
sec).

Maybe not?  Although the {{enablePositionIncrements()}} option was removed from 
StopFilter et al via LUCENE-4963, Robert Muir wrote that the idea in 
LUCENE-4065 may still have merit: 
[https://discuss.elastic.co/t/stop-filter-problem-enablepositionincrements-false-is-not-supported-anymore-as-of-lucene-4-4-as-it-can-create-broken-token-streams/13457/5]







[jira] [Commented] (SOLR-11968) Multi-words query time synonyms

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370883#comment-16370883
 ] 

Steve Rowe commented on SOLR-11968:
---

bq. I think the root cause is LUCENE-4065. I'll try to make a simple test 
demonstrating this.

Not so - LUCENE-4065 should probably be closed as won't-fix (I'll comment there 
in a sec).

Instead, this looks like the problem described in LUCENE-7848.  I tracked the 
problem down to a bug in Lucene's QueryBuilder, which is dropping tokens in 
side paths with position gaps that are caused by StopFilter.

Below is a test that shows the problem - MockSynonymFilter has synonym "cavy" 
for "guinea pig", and the anonymous analyzer below has "pig" on its 
stopfilter's stoplist.  QueryBuilder produces a query for only "cavy", even 
though the token stream also contains "guinea".

{code:java|title=TestQueryBuilder.java}
  public void testGraphStop() {
    Query syn1 = new TermQuery(new Term("field", "guinea"));
    Query syn2 = new TermQuery(new Term("field", "cavy"));

    BooleanQuery synQuery = new BooleanQuery.Builder()
        .add(syn1, BooleanClause.Occur.SHOULD)
        .add(syn2, BooleanClause.Occur.SHOULD)
        .build();
    BooleanQuery expectedGraphQuery = new BooleanQuery.Builder()
        .add(synQuery, BooleanClause.Occur.SHOULD)
        .build();
    QueryBuilder queryBuilder = new QueryBuilder(new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        MockTokenizer tokenizer = new MockTokenizer();
        TokenStream stream = new MockSynonymFilter(tokenizer);
        stream = new StopFilter(stream, CharArraySet.copy(Collections.singleton("pig")));
        return new TokenStreamComponents(tokenizer, stream);
      }
    });
    queryBuilder.setAutoGenerateMultiTermSynonymsPhraseQuery(true);
    assertEquals(expectedGraphQuery,
        queryBuilder.createBooleanQuery("field", "guinea pig", BooleanClause.Occur.SHOULD));
  }
{code}





[JENKINS] Lucene-Solr-repro - Build # 74 - Unstable

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/74/

[...truncated 35 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/430/consoleText

[repro] Revision: d5a01e02687c4f88a2f80ac930e27188f2703385

[repro] Repro line:  ant test  -Dtestcase=TestAuthenticationFramework 
-Dtests.method=testBasics -Dtests.seed=7E5AA61925F3E5F3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=bg-BG -Dtests.timezone=America/Barbados 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SystemLogListenerTest 
-Dtests.method=test -Dtests.seed=7E5AA61925F3E5F3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=pt -Dtests.timezone=America/Godthab 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=7E5AA61925F3E5F3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk-MK 
-Dtests.timezone=Indian/Reunion -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=7E5AA61925F3E5F3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk-MK 
-Dtests.timezone=Indian/Reunion -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=7E5AA61925F3E5F3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=nl-NL -Dtests.timezone=Asia/Colombo 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ComputePlanActionTest 
-Dtests.method=testNodeWithMultipleReplicasLost -Dtests.seed=7E5AA61925F3E5F3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-Latn-BA 
-Dtests.timezone=America/Shiprock -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
a9f0272380438df88d29ed7c41572136f999f8db
[repro] git checkout d5a01e02687c4f88a2f80ac930e27188f2703385

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro]   SystemLogListenerTest
[repro]   TestAuthenticationFramework
[repro]   ComputePlanActionTest
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 
-Dtests.class="*.HdfsAutoAddReplicasIntegrationTest|*.SystemLogListenerTest|*.TestAuthenticationFramework|*.ComputePlanActionTest|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror -Dtests.seed=7E5AA61925F3E5F3 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=nl-NL -Dtests.timezone=Asia/Colombo 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 17417 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestAuthenticationFramework
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.ComputePlanActionTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.SystemLogListenerTest
[repro] git checkout a9f0272380438df88d29ed7c41572136f999f8db

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.x - Build # 431 - Still unstable

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/431/

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyFieldFacetExtrasCloudTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.legacy.facet.LegacyFieldFacetExtrasCloudTest: 1) 
Thread[id=754, name=qtp204398197-754, state=TIMED_WAITING, 
group=TGRP-LegacyFieldFacetExtrasCloudTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at 
org.apache.solr.analytics.legacy.facet.LegacyFieldFacetExtrasCloudTest: 
   1) Thread[id=754, name=qtp204398197-754, state=TIMED_WAITING, 
group=TGRP-LegacyFieldFacetExtrasCloudTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A1827EC393726B9]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyFieldFacetExtrasCloudTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=754, name=qtp204398197-754, state=TIMED_WAITING, 
group=TGRP-LegacyFieldFacetExtrasCloudTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=754, name=qtp204398197-754, state=TIMED_WAITING, 
group=TGRP-LegacyFieldFacetExtrasCloudTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A1827EC393726B9]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.store.rest.TestModelManager

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.ltr.store.rest.TestModelManager: 1) Thread[id=156, 
name=qtp295539100-156, state=TIMED_WAITING, group=TGRP-TestModelManager]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
  

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7183 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7183/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

11 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestBagOfPositions

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001\bagofpositions-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001\bagofpositions-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001\bagofpositions-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001\bagofpositions-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestBagOfPositions_24C98C0195F82C2A-001

at __randomizedtesting.SeedInfo.seed([24C98C0195F82C2A]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyFieldFacetCloudTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1\data\version-2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1\data\version-2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper\server1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.legacy.facet.LegacyFieldFacetCloudTest_F5E28BD88C3B789A-001\tempDir-001\zookeeper:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J0\

[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370869#comment-16370869
 ] 

Steve Rowe commented on LUCENE-8106:


Now that the repro stuff is working on the Policeman Jenkins 
{{Lucene-Solr-master-Linux}} project, I also added the script ^^ as a build 
step to the {{Lucene-Solr-7.x-Linux}} project.

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370868#comment-16370868
 ] 

Bharat Viswanadham commented on SOLR-12009:
---

Thank you [~varunthacker], I have sent an email to the solr-user mailing list.

> Solr OOM exception
> --
>
> Key: SOLR-12009
> URL: https://issues.apache.org/jira/browse/SOLR-12009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: Screen Shot 2018-02-20 at 6.50.53 PM.png, Screen Shot 
> 2018-02-20 at 6.51.04 PM.png
>
>
> Attached the screenshots of the objects, which are using high memory, and 
> causing OOM.
>  
> {code:java}
> at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
>  (IntersectTermsEnumFrame.java:195)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
>  (IntersectTermsEnum.java:211)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:665)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:500)
>  at 
> org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (ExitableDirectoryReader.java:185)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
>  (MultiTermQueryConstantScoreWrapper.java:120)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
>  (MultiTermQueryConstantScoreWrapper.java:147)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
>  (MultiTermQueryConstantScoreWrapper.java:194)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:666)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:473)
>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
>  (SolrIndexSearcher.java:242)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1803)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1620)
>  at 
> org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
>  (SolrIndexSearcher.java:617)
>  at 
> org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
>  (QueryComponent.java:531)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (RequestHandlerBase.java:153)
>  at 
> org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SolrCore.java:2213)
>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
>  (HttpSolrCall.java:654)
>  at 
> org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
>  (HttpSolrCall.java:460)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;Z)V
>  (SolrDispatchFilter.java:303)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V
>  (SolrDispatchFilter.java:254)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V
> 

[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370863#comment-16370863
 ] 

Steve Rowe commented on LUCENE-8106:


Looks like Policeman {{Lucene-Solr-master-Linux}} is working now - see e.g. 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21501/consoleText].  
Here's the final script; I had to add Jenkins's {{ant}} to the PATH to get it 
running:

{noformat}
set -x # Log commands

TMPFILE=`mktemp`
trap "rm -f $TMPFILE" EXIT   # Delete the temp file on shell exit

curl -o $TMPFILE 
https://jenkins.thetaphi.de/job/$JOB_NAME/$BUILD_NUMBER/consoleText

if grep --quiet 'reproduce with' $TMPFILE ; then

# Preserve original build output
mv lucene/build lucene/build.orig
mv solr/build solr/build.orig

PYTHON32_EXE=`grep "^[[:space:]]*python32\.exe[[:space:]]*=" 
~/lucene.build.properties | cut -d'=' -f2`
[ -z $PYTHON32_EXE ] && PYTHON32_EXE=python3
GIT_EXE=`grep "^[[:space:]]*git\.exe[[:space:]]*=" 
~/lucene.build.properties | cut -d'=' -f2`
[ -n $GIT_EXE ] && export PATH=$GIT_EXE:$PATH
export 
ANT_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.4
export PATH=$ANT_HOME/bin:$PATH
$PYTHON32_EXE -u dev-tools/scripts/reproduceJenkinsFailures.py --no-fetch 
file://$TMPFILE

# Preserve repro build output
mv lucene/build lucene/build.repro
mv solr/build solr/build.repro

# Restore original build output
mv lucene/build.orig lucene/build
mv solr/build.orig solr/build
fi
{noformat}
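As a side note, the grep/cut property-lookup idiom used in the script above can be sketched in isolation. The file contents and the {{git.exe}} value below are fabricated purely for illustration; a trailing unquoted {{echo}} trims the whitespace that {{cut}} leaves behind when the properties file uses "key = value" formatting:

```shell
# Illustrative sketch of the properties-file lookup idiom from the script above.
# The property value here is hypothetical, not from any real installation.
PROPS=$(mktemp)
printf 'git.exe = /usr/local/git/bin\n' > "$PROPS"

# Match "git.exe = ..." allowing optional whitespace, then take everything after '='
GIT_EXE=$(grep "^[[:space:]]*git\.exe[[:space:]]*=" "$PROPS" | cut -d'=' -f2)

# Unquoted expansion word-splits, trimming the leading space left by cut
GIT_EXE=$(echo $GIT_EXE)

echo "$GIT_EXE"
rm -f "$PROPS"
```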

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370861#comment-16370861
 ] 

Varun Thacker commented on SOLR-12009:
--

Hi Bharat,

 

Please post the complete stack trace as a question on the solr-user mailing 
list. We'd be happy to help you there.

> Solr OOM exception
> --
>
> Key: SOLR-12009
> URL: https://issues.apache.org/jira/browse/SOLR-12009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: Screen Shot 2018-02-20 at 6.50.53 PM.png, Screen Shot 
> 2018-02-20 at 6.51.04 PM.png
>
>

[jira] [Commented] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370858#comment-16370858
 ] 

Bharat Viswanadham commented on SOLR-12009:
---

Hi [~varunthacker]

That is only the 2nd part, which is taking 11% of the memory; that one can be 
reduced by enabling docValues on the fields. But there is one more: memory 
accumulated in one instance of IntersectTermsEnumFrame[]. So, is this also 
related, or is it a completely different issue?

> Solr OOM exception
> --
>
> Key: SOLR-12009
> URL: https://issues.apache.org/jira/browse/SOLR-12009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: Screen Shot 2018-02-20 at 6.50.53 PM.png, Screen Shot 
> 2018-02-20 at 6.51.04 PM.png
>
>

[jira] [Resolved] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12009.
--
Resolution: Not A Problem

Hi Bharat,

 

Please raise these questions on the solr-user mailing list first.

An OOM doesn't mean it's a bug in Solr. You need to enable docValues on fields 
that you run faceting, sorting, function, collapse, or grouping queries on. 
That will ensure the WeakHashMap in the screenshot is not built on the heap 
but stored as part of the index data structure, which is memory-mapped. There 
are tons of resources online about docValues, and if you have further 
questions the mailing list would be the best place to ask for help.
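For illustration, enabling docValues is a per-field attribute in schema.xml. The field and type names in this hypothetical fragment are made up, not taken from the reporter's schema:

```xml
<!-- Hypothetical schema.xml fragment: enable docValues on a field
     used for faceting/sorting. Field and type names are illustrative only. -->
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
```

With docValues="true", the column-oriented structure is written at index time and read through memory-mapped files instead of being built on the JVM heap at query time.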

> Solr OOM exception
> --
>
> Key: SOLR-12009
> URL: https://issues.apache.org/jira/browse/SOLR-12009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: Screen Shot 2018-02-20 at 6.50.53 PM.png, Screen Shot 
> 2018-02-20 at 6.51.04 PM.png
>
>
> Attached are screenshots of the objects that are using high memory and 
> causing the OOM.
>  
> {code:java}
> at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
>  (IntersectTermsEnumFrame.java:195)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
>  (IntersectTermsEnum.java:211)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:665)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:500)
>  at 
> org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (ExitableDirectoryReader.java:185)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
>  (MultiTermQueryConstantScoreWrapper.java:120)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
>  (MultiTermQueryConstantScoreWrapper.java:147)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
>  (MultiTermQueryConstantScoreWrapper.java:194)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:666)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:473)
>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
>  (SolrIndexSearcher.java:242)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1803)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1620)
>  at 
> org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
>  (SolrIndexSearcher.java:617)
>  at 
> org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
>  (QueryComponent.java:531)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (RequestHandlerBase.java:153)
>  at 
> org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SolrCore.java:2213)
>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
>  (HttpSolrCall.java:654)
>  at 
> org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
>  (HttpSolrCall.java:460)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletR

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21501 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21501/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=3775, name=jetty-launcher-639-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=3775, name=jetty-launcher-639-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
at __randomizedtesting.SeedInfo.seed([682DF1CB68340321]:0)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([682DF1CB68340321:3565EF42A7F2A56E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate(TriggerIntegrationTest.java:1448)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
co

[jira] [Updated] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated SOLR-12009:
--
Attachment: Screen Shot 2018-02-20 at 6.51.04 PM.png
Screen Shot 2018-02-20 at 6.50.53 PM.png

> Solr OOM exception
> --
>
> Key: SOLR-12009
> URL: https://issues.apache.org/jira/browse/SOLR-12009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: Screen Shot 2018-02-20 at 6.50.53 PM.png, Screen Shot 
> 2018-02-20 at 6.51.04 PM.png
>
>
> qtp984849465-237413
> {code:java}
> at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
>  (IntersectTermsEnumFrame.java:195)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
>  (IntersectTermsEnum.java:211)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:665)
>  at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (IntersectTermsEnum.java:500)
>  at 
> org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
>  (ExitableDirectoryReader.java:185)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
>  (MultiTermQueryConstantScoreWrapper.java:120)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
>  (MultiTermQueryConstantScoreWrapper.java:147)
>  at 
> org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
>  (MultiTermQueryConstantScoreWrapper.java:194)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:666)
>  at 
> org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
>  (IndexSearcher.java:473)
>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
>  (SolrIndexSearcher.java:242)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1803)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
>  (SolrIndexSearcher.java:1620)
>  at 
> org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
>  (SolrIndexSearcher.java:617)
>  at 
> org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
>  (QueryComponent.java:531)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (RequestHandlerBase.java:153)
>  at 
> org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
>  (SolrCore.java:2213)
>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
>  (HttpSolrCall.java:654)
>  at 
> org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
>  (HttpSolrCall.java:460)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;Z)V
>  (SolrDispatchFilter.java:303)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V
>  (SolrDispatchFilter.java:254)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V
>  (ServletHandler.java:1668)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.do

[jira] [Updated] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated SOLR-12009:
--
Description: 
Attached are screenshots of the objects that are using high memory and 
causing the OOM.

 
{code:java}
at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
 (IntersectTermsEnumFrame.java:195)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
 (IntersectTermsEnum.java:211)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:665)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:500)
 at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (ExitableDirectoryReader.java:185)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
 (MultiTermQueryConstantScoreWrapper.java:120)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
 (MultiTermQueryConstantScoreWrapper.java:147)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
 (MultiTermQueryConstantScoreWrapper.java:194)
 at 
org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:666)
 at 
org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:473)
 at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
 (SolrIndexSearcher.java:242)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1803)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1620)
 at 
org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
 (SolrIndexSearcher.java:617)
 at 
org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
 (QueryComponent.java:531)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SearchHandler.java:295)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (RequestHandlerBase.java:153)
 at 
org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SolrCore.java:2213)
 at 
org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
 (HttpSolrCall.java:654)
 at 
org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
 (HttpSolrCall.java:460)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;Z)V
 (SolrDispatchFilter.java:303)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V
 (SolrDispatchFilter.java:254)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V
 (ServletHandler.java:1668)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ServletHandler.java:581)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ScopedHandler.java:143)
 at 
org.eclipse.jetty.security.SecurityHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.s

[jira] [Updated] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated SOLR-12009:
--
Description: 
qtp984849465-237413
{code:java}
at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
 (IntersectTermsEnumFrame.java:195)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
 (IntersectTermsEnum.java:211)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:665)
 at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:500)
 at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (ExitableDirectoryReader.java:185)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
 (MultiTermQueryConstantScoreWrapper.java:120)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
 (MultiTermQueryConstantScoreWrapper.java:147)
 at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
 (MultiTermQueryConstantScoreWrapper.java:194)
 at 
org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:666)
 at 
org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:473)
 at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
 (SolrIndexSearcher.java:242)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1803)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1620)
 at 
org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
 (SolrIndexSearcher.java:617)
 at 
org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
 (QueryComponent.java:531)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SearchHandler.java:295)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (RequestHandlerBase.java:153)
 at 
org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SolrCore.java:2213)
 at 
org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
 (HttpSolrCall.java:654)
 at 
org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
 (HttpSolrCall.java:460)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;Z)V
 (SolrDispatchFilter.java:303)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V
 (SolrDispatchFilter.java:254)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V
 (ServletHandler.java:1668)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ServletHandler.java:581)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ScopedHandler.java:143)
 at 
org.eclipse.jetty.security.SecurityHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(Ljava/lang/String;Lorg/eclipse/jetty/ser

[jira] [Created] (SOLR-12009) Solr OOM exception

2018-02-20 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created SOLR-12009:
-

 Summary: Solr OOM exception
 Key: SOLR-12009
 URL: https://issues.apache.org/jira/browse/SOLR-12009
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Bharat Viswanadham


qtp984849465-237413
  at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnumFrame.load(Lorg/apache/lucene/util/BytesRef;)V
 (IntersectTermsEnumFrame.java:195)
  at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.pushFrame(I)Lorg/apache/lucene/codecs/blocktree/IntersectTermsEnumFrame;
 (IntersectTermsEnum.java:211)
  at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum._next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:665)
  at 
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (IntersectTermsEnum.java:500)
  at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.next()Lorg/apache/lucene/util/BytesRef;
 (ExitableDirectoryReader.java:185)
  at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.collectTerms(Lorg/apache/lucene/index/LeafReaderContext;Lorg/apache/lucene/index/TermsEnum;Ljava/util/List;)Z
 (MultiTermQueryConstantScoreWrapper.java:120)
  at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/MultiTermQueryConstantScoreWrapper$WeightOrDocIdSet;
 (MultiTermQueryConstantScoreWrapper.java:147)
  at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(Lorg/apache/lucene/index/LeafReaderContext;)Lorg/apache/lucene/search/BulkScorer;
 (MultiTermQueryConstantScoreWrapper.java:194)
  at 
org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:666)
  at 
org.apache.lucene.search.IndexSearcher.search(Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;)V
 (IndexSearcher.java:473)
  at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(Lorg/apache/solr/search/QueryResult;Lorg/apache/lucene/search/Query;Lorg/apache/lucene/search/Collector;Lorg/apache/solr/search/QueryCommand;Lorg/apache/solr/search/DelegatingCollector;)V
 (SolrIndexSearcher.java:242)
  at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1803)
  at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)V
 (SolrIndexSearcher.java:1620)
  at 
org.apache.solr.search.SolrIndexSearcher.search(Lorg/apache/solr/search/QueryResult;Lorg/apache/solr/search/QueryCommand;)Lorg/apache/solr/search/QueryResult;
 (SolrIndexSearcher.java:617)
  at 
org.apache.solr.handler.component.QueryComponent.process(Lorg/apache/solr/handler/component/ResponseBuilder;)V
 (QueryComponent.java:531)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SearchHandler.java:295)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (RequestHandlerBase.java:153)
  at 
org.apache.solr.core.SolrCore.execute(Lorg/apache/solr/request/SolrRequestHandler;Lorg/apache/solr/request/SolrQueryRequest;Lorg/apache/solr/response/SolrQueryResponse;)V
 (SolrCore.java:2213)
  at 
org.apache.solr.servlet.HttpSolrCall.execute(Lorg/apache/solr/response/SolrQueryResponse;)V
 (HttpSolrCall.java:654)
  at 
org.apache.solr.servlet.HttpSolrCall.call()Lorg/apache/solr/servlet/SolrDispatchFilter$Action;
 (HttpSolrCall.java:460)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;Z)V
 (SolrDispatchFilter.java:303)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V
 (SolrDispatchFilter.java:254)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V
 (ServletHandler.java:1668)
  at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ServletHandler.java:581)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 (ScopedHandler.java:143)
  at 
org.eclipse.jetty.security.SecurityHandler.handle(Ljava/lang/String;Lorg/eclipse/jetty/server/Request;Ljavax/servlet/http/HttpS

[jira] [Commented] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370840#comment-16370840
 ] 

Varun Thacker commented on SOLR-12006:
--

The Git bot has been silent, so here are the commits:

master: 
[https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a9f0272380438df88d29ed7c41572136f999f8db]

branch_7x: 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a9f0272380438df88d29ed7c41572136f999f8db

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch, SOLR-12006.patch, SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text.
>  
> Solr 4.x: 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
> Somewhere in Solr 5.x both became the same definition: 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master there is now no "_t" dynamic field anymore.
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the 
> same option for a text field.
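The distinction described in this issue can be sketched as schema declarations (a rough illustration of the intent, not the attached patch):

```xml
<!-- Rough sketch of the intent, not the committed patch:
     *_t maps to a single-valued text field, *_txt to a multi-valued one. -->
<dynamicField name="*_t"   type="text_general" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
```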



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12006:
-
Attachment: SOLR-12006.patch

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch, SOLR-12006.patch, SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text.
>  
> Solr 4.x: 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
> Somewhere in Solr 5.x both became the same definition: 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master there is now no "_t" dynamic field anymore.
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the 
> same option for a text field.






[jira] [Updated] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12006:
-
Attachment: SOLR-12006.patch

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch, SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single-valued and a "_txt" 
> field for multi-valued text.
>  
> Solr 4.x: 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
> Somewhere in Solr 5.x both became the same definition: 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the 
> same option for a text field.






[jira] [Commented] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370812#comment-16370812
 ] 

Varun Thacker commented on SOLR-12006:
--

The approach taken in this patch will break back-compat for users who expect 
"text_general" to be multi-valued, since it no longer is.

It doesn't break anything if you were using dynamic fields, but if you were 
defining a new field with fieldType=text_general (as "add-schema-fields" does 
in our solrconfig), then that will break.

Instead, we could explicitly declare the "_t" dynamic field with 
multiValued=false.

I'll post a patch with that approach soon.
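For illustration, the explicit declaration might look something like the sketch below in managed-schema. This is an assumption based on the default configset conventions, not the actual patch:

```xml
<!-- Hypothetical sketch (not the actual patch): a single-valued *_t
     dynamic field declared with an explicit multiValued="false",
     alongside the existing multi-valued *_txt field. -->
<dynamicField name="*_t"   type="text_general" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
```

Declaring multiValued explicitly on the dynamic field, rather than on the "text_general" type itself, avoids changing the behavior of fields that reference the type directly.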

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single-valued and a "_txt" 
> field for multi-valued text.
>  
> Solr 4.x: 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
> Somewhere in Solr 5.x both became the same definition: 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the 
> same option for a text field.






[jira] [Reopened] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-20 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi reopened SOLR-11795:
---

Reopening this. We're still working on this.

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795-8.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I 'd like to monitor Solr using Prometheus and Grafana.
> I've already created Solr metrics exporter for Prometheus. I'd like to 
> contribute to contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[jira] [Updated] (SOLR-11960) Add collection level properties

2018-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-11960:
-
Attachment: SOLR-11960.patch

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch, SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.






[jira] [Commented] (SOLR-11960) Add collection level properties

2018-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370797#comment-16370797
 ] 

Tomás Fernández Löbbe commented on SOLR-11960:
--

Thanks [~prusko], the patch looks great. I modified 
{{CollectionPropsTest.testReadWrite}} to check immediately: since we read the 
value directly from ZooKeeper, the change should be immediate and there is no 
need to wait. I also added a test, {{CollectionPropsTest.testReadWriteCached}}, 
that adds a watcher so that we do read the cached state; for that case we do 
need to wait until the value is set asynchronously.
I’m going to upload a patch with my latest changes and commit shortly. Can you 
update the docs for this new command?
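For readers following along, the new command would presumably be invoked through the Collections API along these lines; the action and parameter names below are assumptions and should be checked against the final documentation:

```text
# Hypothetical sketch of setting a collection property via the
# v1 Collections API (action and parameter names are assumptions):
/admin/collections?action=COLLECTIONPROP&name=mycollection&propertyName=foo&propertyValue=bar

# Presumably, omitting propertyValue would delete the property:
/admin/collections?action=COLLECTIONPROP&name=mycollection&propertyName=foo
```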

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 454 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/454/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=19728, name=jetty-launcher-3961-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)   
 2) Thread[id=19809, 
name=jetty-launcher-3961-thread-2-SendThread(127.0.0.1:58653), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)   
 3) Thread[id=19810, name=jetty-launcher-3961-thread-2-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=19728, name=jetty-launcher-3961-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
   2) Thread[id=19809, 
name=jetty-launcher-3961-thread-2-SendThread(127.0.0

[jira] [Commented] (SOLR-11978) include SortableTextField in _default and sample_techproducts configsets

2018-02-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370775#comment-16370775
 ] 

Varun Thacker commented on SOLR-11978:
--

Hi Hoss,

At the risk of being more verbose, what do you think about changing the name 
to "text_general_sort" instead of "text_gen_sort"? It would then be closer to 
"text_general", making it easier for users to discover.

There is also a typo on line 294: "generaly" -> "generally"
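For context, the renamed field type might be declared roughly as follows; the analyzer chain here is illustrative, not taken from the patch:

```xml
<!-- Hypothetical sketch: SortableTextField under the more discoverable
     name proposed above. docValues defaults to true for this class,
     which is the extra on-disk cost discussed in the issue. -->
<fieldType name="text_general_sort" class="solr.SortableTextField"
           positionIncrementGap="100" multiValued="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```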

 

 

> include SortableTextField in _default and sample_techproducts configsets
> 
>
> Key: SOLR-11978
> URL: https://issues.apache.org/jira/browse/SOLR-11978
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11978.patch
>
>
> since SortableTextField defaults to docValues="true" it has additional on 
> disk overhead compared to TextField that means I don't think we should 
> completley replace all suggested uses of TextField at this point – but it 
> would still be good to include it in our configsets similar to the way we 
> include declarations for a variety of text analysis options.
> I also think several "explicit" fields in the techproducts schema would 
> benefit from using this.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+43) - Build # 1397 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1397/
Java: 64bit/jdk-10-ea+43 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5F161D6C652347B:8DA55E0C68AE5983]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([5F161D6C652347B:680DC52B7C1ACB7C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.jun

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370755#comment-16370755
 ] 

Andrey Kudryavtsev commented on SOLR-9510:
--

{quote}
exclusion might end-up with MatchNoDocs.
{quote}

Didn't get this part.

{{testDomainFilterExclusionsInFilters}} is green for me even when the '{{"fq", 
"type_s:book"}}' line is commented out, because 
{{SolrIndexSearcher#getDocSet(List queries)}} in
{code:java}
// recompute the base domain
fcontext.base = fcontext.searcher.getDocSet(qlist);{code}
will return all live docs when the {{queries}} list is empty.

But in general this patch is too good not to break something unexpected.

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth to consider JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it's might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 958 - Still Failing

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/958/

No tests ran.

Build Log:
[...truncated 28738 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (34.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.03 sec (1151.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.2 MB in 0.07 sec (1014.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.7 MB in 0.09 sec (939.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (241.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.08 sec (695.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.0 MB in 0.42 sec (356.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.0 MB in 0.20 sec (757.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WA

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1690 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1690/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=5200, name=jetty-launcher-1139-thread-1-EventThread, state=WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
2) Thread[id=5202, name=jetty-launcher-1139-thread-2-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
3) Thread[id=5201, 
name=jetty-launcher-1139-thread-2-SendThread(127.0.0.1:64910), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)   
 4) Thread[id=5199, 
name=jetty-launcher-1139-thread-1-SendThread(127.0.0.1:64910), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=5200, name=jetty-launcher-1139-thread-1-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   2) Thread[id=5202, name=jetty-launcher-1139-thread-2-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   3) Thread[id=5201, 
name=jetty-launcher-1139-thread-2-SendThread(127.0.0.1:64910), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
   4) Thread[id=5199, 
name=jetty-launcher-1139-thread-1-SendThread(127.0.0.1:64910), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
at __randomizedtesting.SeedInfo.seed([2E958DD81938D690]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=5201, name=jetty-launcher-1139-thread-2-SendThread(127.0.0.1:64910), 
state=TIMED_WAITING, group=TGRP-TestSo

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21500 - Still Failing!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21500/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 13459 lines...]
   [junit4] Suite: org.apache.solr.cloud.api.collections.ShardSplitTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.ShardSplitTest_9AA8EB24C135BB0F-001/init-core-data-001
   [junit4]   2> 1484840 WARN  
(SUITE-ShardSplitTest-seed#[9AA8EB24C135BB0F]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 1484840 INFO  
(SUITE-ShardSplitTest-seed#[9AA8EB24C135BB0F]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1484840 INFO  
(SUITE-ShardSplitTest-seed#[9AA8EB24C135BB0F]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl="https://issues.apache.org/jira/browse/SOLR-5776";)
   [junit4]   2> 1484840 INFO  
(SUITE-ShardSplitTest-seed#[9AA8EB24C135BB0F]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 1484840 INFO  
(SUITE-ShardSplitTest-seed#[9AA8EB24C135BB0F]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1484841 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1484841 INFO  (Thread-831) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1484841 INFO  (Thread-831) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1484846 ERROR (Thread-831) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1484941 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.ZkTestServer start zk server on port:35627
   [junit4]   2> 1484943 INFO  (zkConnectionManagerCallback-764-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1484945 INFO  (zkConnectionManagerCallback-766-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1484946 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1484947 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 1484948 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1484948 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 1484948 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 1484949 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 1484949 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 1484950 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 1484951 INFO  
(TEST-ShardSplitTest.testSplitWithChaosMonkey-seed#[9AA8EB24C135BB0F]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucen

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370630#comment-16370630
 ] 

Mikhail Khludnev commented on SOLR-9510:


Here we go, the current patch:
* just adds {{filters}} into {{\{!parent}}, and
* introduces the brand new {{\{!filters param=$chq}} (mind the singular; btw, shouldn't this parameter be named {{ref}}?)
* beside that, there are no changes in json.facet at all.
* the how-to is:
** tag the main query: {{q=\{!parent tag=top}}
** have {{fq=type:parent}}
** exclude it in {{domain:\{excludeTags:top}}}
** join the expanded parents to children (there might be a performance penalty)
** filter them again with the filter exclusion {{filter:"\{!filters param=$chq excludeTags=color"}}

In addition to the earlier TODO: extract the {{excludeTags}} code and reuse it between bjq and filters. Btw, can bjq be a descendant of that {{\{!filters}}?

[~werder] the difference between {{global}} and {{excludeTags=top}} is that the former selects {{*:*}}, while the exclusion might end up with MatchNoDocs.

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move it into 
> the {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
>   comments_for_author:{
>     type:query,
>     q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>     "//note":"author filter is excluded",
>     domain:{
>       blockChildren:"type_s:book",
>       "//":"applying filters here might be more promising"
>     },
>     facet:{
>       authors:{
>         type:terms,
>         field:author_s,
>         facet:{
>           in_books:"unique(_root_)"
>         }
>       }
>     }
>   },
>   comments_for_stars:{
>     type:query,
>     q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>     "//note":"stars_i filter is excluded",
>     domain:{
>       blockChildren:"type_s:book"
>     },
>     facet:{
>       stars:{
>         type:terms,
>         field:stars_i,
>         facet:{
>           in_books:"unique(_root_)"
>         }
>       }
>     }
>   }
> }
> {code} 
> Votes? Opinions?






[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370625#comment-16370625
 ] 

Uwe Schindler commented on SOLR-11078:
--

I'd use the good old NumericField encoding. Or create a BytesRef with the raw 
bytes of the long, just applying the usual bit magic to make negatives sort 
correctly. For floats, do the other bit shifts from NumericUtils. There are 
methods for it.
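As a rough illustration of the bit magic Uwe mentions, here is a self-contained sketch that flips the sign bit of a long so that the unsigned byte order of the encoded value matches the signed numeric order (the helper names are made up for this sketch; Lucene's NumericUtils provides the real equivalents):

```java
public class SortableLongDemo {
    // Flip the sign bit so negatives sort before positives when the
    // resulting bytes are compared as unsigned, big-endian values.
    public static byte[] encode(long v) {
        long flipped = v ^ Long.MIN_VALUE;
        byte[] out = new byte[8];
        for (int i = 0; i < 8; i++) {
            out[i] = (byte) (flipped >>> (56 - 8 * i)); // big-endian order
        }
        return out;
    }

    // Lexicographic unsigned comparison, as a BytesRef comparator would do.
    public static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) return cmp;
        }
        return 0;
    }

    public static void main(String[] args) {
        // -5 encodes below 3, so byte order equals numeric order.
        System.out.println(compareUnsigned(encode(-5L), encode(3L)) < 0); // true
    }
}
```

For floats the transform is asymmetric: positive values only need the sign bit flipped, while negative values need all their bits flipped to sort correctly.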

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> since Solr 6.4.2 performance has dropped. We have two indices per server: 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collection or other 
> underlying changes. I am not sure if other high-transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!






[jira] [Updated] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9510:
---
Attachment: SOLR_9510.patch







[jira] [Created] (SOLR-12008) Remove log4j.properties file in solr/example/resources (and perhaps others)

2018-02-20 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12008:
-

 Summary: Remove log4j.properties file in solr/example/resources 
(and perhaps others)
 Key: SOLR-12008
 URL: https://issues.apache.org/jira/browse/SOLR-12008
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Erick Erickson
Assignee: Erick Erickson


As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to use 
%c, but the file in "solr/example/resources/log4j.properties" was not changed. 
That got me to looking around and there are a bunch of log4j.properties files:

./solr/core/src/test-files/log4j.properties
./solr/example/resources/log4j.properties
./solr/solrj/src/test-files/log4j.properties
./solr/server/resources/log4j.properties
./solr/server/scripts/cloud-scripts/log4j.properties
./solr/contrib/dataimporthandler/src/test-files/log4j.properties
./solr/contrib/clustering/src/test-files/log4j.properties
./solr/contrib/ltr/src/test-files/log4j.properties
./solr/test-framework/src/test-files/log4j.properties
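For reference, the %C vs %c distinction that motivated the 2015 change (a minimal log4j 1.x pattern sketch; the appender name here is hypothetical):

```
# %C resolves the calling class via a stack walk at log time (expensive);
# %c simply prints the logger's category name (cheap).
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %-5p (%t) [%c] %m%n
```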

Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
propose the logging configuration files get consolidated. The question is "how 
far"? 

I at least want to get rid of the one in solr/example, users should use the one 
in server/resources. Having to maintain these two separately is asking for 
trouble.

[~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
server/scripts/cloud-scripts?

Anyone else who has a clue about why the other properties files were created, 
especially the ones in contrib?

And what about all the ones in various test-files directories? People didn't 
create them for no reason, and I don't want to rediscover that it's a real pain 
to try to re-use the one in server/resources for instance.







[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 464 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/464/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by ComputePlanAction should 
not be null SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, 
null], BEFORE_ACTION=[compute_plan, null]}
at 
__randomizedtesting.SeedInfo.seed([A6398EDFF83E473B:96F96F5D704CA667]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakContro

[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370574#comment-16370574
 ] 

Steve Rowe commented on LUCENE-8106:


BTW I noticed a side-effect of running the script as a Build step on Policeman 
Jenkins: if the Ant build step fails (e.g. precommit), then none of the 
following Build steps will be invoked.  Maybe this is ok?

An alternative would be to switch from a Build step to a Post-Build step, but 
this would require installing the Post Build Script plugin: 
https://github.com/jenkinsci/postbuildscript-plugin


> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}






[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370568#comment-16370568
 ] 

Shawn Heisey commented on SOLR-11078:
-

bq. What about faceting/pivoting and what about sorting? If there are some 
insights into their performance with point-fields, that would be great to know 
as well.

For these operations, the field should have docValues defined for best 
performance.  This would be the case for any field type -- string, Trie, Point, 
etc.  At the moment, it's not possible to enable docValues on a TextField type 
(the one used in Solr for tokenized terms).  But this is generally not a 
problem, as that type of field is normally not very useful for 
facets/groups/sorts.
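As a toy sketch of why docValues help here (a conceptual illustration only, not Lucene's actual storage): sorting and faceting ask "what is doc N's value?", which a per-document column answers in a single lookup, while an inverted index answers the opposite question ("which docs have value V?").

```java
import java.util.*;

public class DocValuesSketch {
    // docValues analogue: a column indexed by doc id. Faceting is then a
    // single sequential pass over the column.
    public static Map<String, Integer> facetCounts(String[] column) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String v : column) counts.merge(v, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        // Inverted index analogue: term -> postings list of doc ids.
        // Great for search, awkward for per-document questions.
        Map<String, List<Integer>> inverted = new HashMap<>();
        inverted.put("yonik", Arrays.asList(0, 2));
        inverted.put("hoss", Collections.singletonList(1));

        // The same data laid out as a docValues-style column.
        String[] docValues = {"yonik", "hoss", "yonik"};

        System.out.println(facetCounts(docValues)); // {hoss=1, yonik=2}
    }
}
```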

bq. Also, do we know Lucene's response for this performance hit?

[~jpountz] probably qualifies as somebody who can comment here.  And I think 
what he wrote does serve as a response, but I would like more detail.  I'm not 
really familiar with how Solr actually leverages Lucene for its field types.  I 
have *VERY* little understanding of the Lucene API -- have never actually 
written a program that uses Lucene.  I'm not opposed to learning, but every 
time I have started descending the rabbit hole, I have found myself getting 
lost and unable to figure out how it all fits together.

bq. something like NumericIdField, that works exactly like StrField but uses an 
encoding that preserves the numeric order.

Adrien, I couldn't find any info on this. Can you point me at some relevant 
javadocs or other documentation? Hopefully it's not the old "zero-padded 
string" idea, which, as far as I know, isn't used any more.



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4452 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4452/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([E4FC347885C8E5FB:6CA80BA22B348803]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAda

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+43) - Build # 1396 - Still unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1396/
Java: 64bit/jdk-10-ea+43 -XX:-UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest: 1) 
Thread[id=83, name=qtp2033452555-83, state=TIMED_WAITING, 
group=TGRP-LegacyQueryFacetCloudTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@10/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest: 
   1) Thread[id=83, name=qtp2033452555-83, state=TIMED_WAITING, 
group=TGRP-LegacyQueryFacetCloudTest]
at java.base@10/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@10/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([464E32CEFC406BD4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=83, 
name=qtp2033452555-83, state=TIMED_WAITING, 
group=TGRP-LegacyQueryFacetCloudTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@10/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=83, name=qtp2033452555-83, state=TIMED_WAITING, 
group=TGRP-LegacyQueryFacetCloudTest]
at java.base@10/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@10/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([464E32CEFC406BD4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest: 1) 
Thread[id=51, name=qtp498975676-51, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest]   

[jira] [Commented] (SOLR-12005) Solr should have the option of logging all jars loaded

2018-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370497#comment-16370497
 ] 

Shawn Heisey commented on SOLR-12005:
-

{quote}
Dunno, If you specify the -v option when starting Solr, does that provide the 
information you want?
{quote}

Yes ... but it also logs thousands of lines of cruft that are simply not useful 
for basic troubleshooting when the issue is *probably* user error. DEBUG logs 
are useful for problems where we suspect that Solr has a bug, to verify whether 
or not it's working by seeing internal state as the code runs. But even then, 
it's more useful to turn on DEBUG logging for individual classes (either in 
log4j.properties or in the admin UI) than for the entire logging config.

We could modify log4j.properties so that SolrResourceLoader is raised to 
DEBUG, and have users turn that on when troubleshooting these issues. If 
that's really what everyone thinks we should do, it can even go in the 
reference guide, and it could already be in log4j.properties, but commented 
out.  I am not opposed to this idea; it just wasn't the first thing I 
thought of.
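A minimal sketch of what that entry could look like (log4j 1.2 properties 
syntax, as used by Solr 6.x; the exact file location varies by install):

```properties
# Hypothetical addition to log4j.properties: raise only SolrResourceLoader
# to DEBUG so the jar-loading lines are logged without enabling DEBUG for
# the whole config. It could ship commented out, for users to uncomment
# when troubleshooting classloader issues.
log4j.logger.org.apache.solr.core.SolrResourceLoader=DEBUG
```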

For my servers, the change would result in eight additional lines being logged 
by default – the eight jars that I have in ${solr.solr.home}/lib. Or perhaps 
nine lines, as there might be a line indicating which directory is being 
examined. I do not have <lib> elements in any solrconfig.xml file. This isn't 
eight lines for every core (of which I have a couple dozen per server) ... but 
eight lines total.  For out of the box users, there would be no increase in 
logging.

{noformat}
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/solr-dataimporthandler-6.6.2-SNAPSHOT.jar' to 
classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/ncSolrUpdateProcessors.jar' to classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/jblas-1.2.4.jar' to classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/icu4j-56.1.jar' to classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/mysql-connector-java-5.1.40-bin.jar' to classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/CJKFoldingFilter.jar' to classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/lucene-analyzers-icu-6.6.2-SNAPSHOT.jar' to 
classloader
2018-02-20 18:38:18.768 DEBUG (main) [   ] o.a.s.c.SolrResourceLoader Adding 
'file:/index/solr6/data/lib/pixolution_flow3.2.0_solr6.6.0.jar' to classloader
{noformat}

If the user enables a config option to turn on additional jar logging, there 
may be a lot more lines logged. This is what Solr 7.2.1 logs out of the box for 
startup of a core using the _default configset:

{code:java}
2018-02-20 18:59:22.041 INFO  (coreLoadExecutor-6-thread-1) [   x:foo] 
o.a.s.c.SolrResourceLoader [foo] Added 53 libs to classloader, from paths: 
[/C:/Users/sheisey/Downloads/solr-7.2.1/contrib/clustering/lib, 
/C:/Users/sheisey/Downloads/solr-7.2.1/contrib/extraction/lib, 
/C:/Users/sheisey/Downloads/solr-7.2.1/contrib/langid/lib, 
/C:/Users/sheisey/Downloads/solr-7.2.1/contrib/velocity/lib, 
/C:/Users/sheisey/Downloads/solr-7.2.1/dist]
{code}

That's 53 files loaded for one core, and the <lib> config in solrconfig.xml 
ensures they're all jar files. I totally understand the desire to eliminate 
those lines from the default logging. For the average user that never uses any 
custom plugins or anything from contrib (like analysis-extras) that is not in 
the default example, that information doesn't help them, and may actually 
confuse them.

If the solr.xml option I've mentioned were enabled and the user's server had a 
handful of cores with solrconfig.xml files mostly unchanged from the default 
example, they would have a few hundred extra lines in their log. Each extra 
line would not be horrendously verbose, and the information is usually relevant 
to somebody trying to figure out a problem that involves that config option. 
And also remember that these lines would NOT be logged without enabling the 
config.


> Solr should have the option of logging all jars loaded
> --
>
> Key: SOLR-12005
> URL: https://issues.apache.org/jira/browse/SOLR-12005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Solr used to explicitly log the filename of every jar it loaded.  It seem

[jira] [Commented] (SOLR-11982) Add support for preferReplicaTypes parameter

2018-02-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370495#comment-16370495
 ] 

Erick Erickson commented on SOLR-11982:
---

[~emaijala] Is this bit a typo in the docs? Shouldn't it be 
"preferReplicaTypes=PULL,TLOG" rather than "preferLocalShards"???

Solr allows you to pass an optional string parameter named `preferReplicaTypes` 
to indicate that a distributed query should prefer replicas of given types when 
available. In other words, if a query includes e.g. 
`*preferLocalShards*=PULL,TLOG`,...

[~cpoerschke] One trick for whitespace bits if you use IntelliJ is to:
> apply the patch
> command-9 to show local changes
> bring up the context menu on the first one and select "diff..."
> in the upper left there's a drop-down, and one of the choices is "ignore 
> whitespace/newlines" or something like that.

I agree, though, that it's better not to reformat lots of whitespace.

Also, one IntelliJ option that should be enabled is to auto-format only 
changed lines. There's a setting somewhere in the IntelliJ preferences to 
ensure this.



> Add support for preferReplicaTypes parameter
> 
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0), 7.3
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily prefer certain replica 
> types in a similar fashion to preferLocalShards. I'll be coming up with a 
> patch that allows one to specify e.g. preferReplicaTypes=PULL,TLOG which 
> would mean that NRT replicas wouldn't be hit with queries unless they're the 
> only ones available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11982) Add support for preferReplicaTypes parameter

2018-02-20 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370480#comment-16370480
 ] 

Christine Poerschke edited comment on SOLR-11982 at 2/20/18 7:14 PM:
-

Hello [~emaijala],

thanks for opening this ticket and attaching a patch with tests and solr ref 
guide documentation update.

I agree the ability to prefer certain replica types would be a nice new feature.

Question: would {{preferReplicaTypes=PULL,TLOG}} and 
{{preferReplicaTypes=TLOG,PULL}} be equivalent, or would there be a difference, 
i.e. the first type most preferred, the second type next preferred, and any 
other unmentioned types all equally unpreferred?

Thanks for clarifying (in the documentation update part of the patch) the 
intended behaviour when both the existing {{preferLocalShards}} and the new 
{{preferReplicaTypes}} are specified.
{code}
+This parameter overrides `preferLocalShards=true`. Both can be defined, but 
replicas of preferred types are always selected over local shards.
{code}

{{preferReplicaTypes overrides preferLocalShards}} vs. {{preferLocalShards 
overrides preferReplicaTypes}} - what might the use cases in either scenario 
be? Or perhaps the two parameters could be mutually exclusive for less 
potential user confusion and a simpler implementation. What do you think?

Specific patch feedback from taking only a quick look: there seem to be quite a 
few whitespace formatting changes, which makes it tricky to 'see' the actual 
changes. Steps along the following lines could be one way to try to undo those 
whitespace reformats:
{code}
git checkout -b master-solr-11982 -t origin/master
git apply SOLR-11982.patch
git diff -w > temp.patch
git checkout HEAD --
git apply --ignore-whitespace temp.patch
{code}


was (Author: cpoerschke):
Hello [~emaijala],

thanks for opening this ticket and attaching a patch with tests and solr ref 
guide documentation update.

I agree the ability to prefer certain replica types would be a nice new feature.

Question: would {{preferReplicaTypes=PULL,TLOG}} and 
{{preferReplicaTypes=TLOG,PULL}} be equivalent or would there be a difference 
i.e. the first type most preferred, the second type next preferred, and any 
other unmentioned types all equivally unpreferred?

Thanks for clarifying (in the documentation update part of the patch) the 
intended behaviour when both the existing {{preferLocalShards}} and the new 
{{preferReplicaTypes}} are specified.
{code}
+This parameter overrides `preferLocalShards=true`. Both can be defined, but 
replicas of preferred types are always selected over local shards.
{code}

{{preferReplicaTypes overrides preferLocalShards}} vs. {{preferLocalShards vs. 
preferReplicaTypes}} - what might the use cases in either scenario be? Or 
perhaps the two parameters could be mutually exclusive for less potential user 
confusion and a simpler implementation. What do you think?

Specific patch feedback from taking only a quick look: there seem to be quite a 
few whitespace change formatting changes which makes it tricky to 'see' the 
actual changes. Steps along the following lines could be one way to try and 
undo those whitespace reformats:
{code}
git checkout -b master-solr-11982 -t origin/master
git apply SOLR-11982.patch
git diff -w > temp.patch
git checkout HEAD --
git apply --ignore-whitespace temp.patch
{code}

> Add support for preferReplicaTypes parameter
> 
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0), 7.3
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily prefer certain replica 
> types in a similar fashion to preferLocalShards. I'll be coming up with a 
> patch that allows one to specify e.g. preferReplicaTypes=PULL,TLOG which 
> would mean that NRT replicas wouldn't be hit with queries unless they're the 
> only ones available.






[jira] [Commented] (SOLR-11982) Add support for preferReplicaTypes parameter

2018-02-20 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370480#comment-16370480
 ] 

Christine Poerschke commented on SOLR-11982:


Hello [~emaijala],

thanks for opening this ticket and attaching a patch with tests and solr ref 
guide documentation update.

I agree the ability to prefer certain replica types would be a nice new feature.

Question: would {{preferReplicaTypes=PULL,TLOG}} and 
{{preferReplicaTypes=TLOG,PULL}} be equivalent, or would there be a difference, 
i.e. the first type most preferred, the second type next preferred, and any 
other unmentioned types all equally unpreferred?
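
One way the order-sensitive reading could work is sketched below (hypothetical 
Java, not the attached patch; the class and method names are made up for 
illustration): replicas whose type appears earlier in the preference list sort 
first, and unlisted types sort last.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch (not the attached SOLR-11982 patch): order replica
// types so that types listed earlier in preferReplicaTypes come first and
// any unlisted types (e.g. NRT) come last.
public class ReplicaTypePreference {
    public static List<String> orderByPreference(List<String> replicaTypes,
                                                 List<String> preferred) {
        List<String> sorted = new ArrayList<>(replicaTypes);
        sorted.sort(Comparator.comparingInt(t -> {
            int i = preferred.indexOf(t);
            return i == -1 ? Integer.MAX_VALUE : i; // unlisted types sort last
        }));
        return sorted;
    }

    public static void main(String[] args) {
        List<String> replicas = Arrays.asList("NRT", "TLOG", "PULL", "NRT");
        // Under the order-sensitive reading, PULL,TLOG prefers PULL over TLOG,
        // and both over NRT; prints [PULL, TLOG, NRT, NRT].
        System.out.println(orderByPreference(replicas, Arrays.asList("PULL", "TLOG")));
    }
}
```

Under the equivalent (order-insensitive) reading, PULL,TLOG and TLOG,PULL 
would instead produce the same two-way split: preferred types first, NRT last.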

Thanks for clarifying (in the documentation update part of the patch) the 
intended behaviour when both the existing {{preferLocalShards}} and the new 
{{preferReplicaTypes}} are specified.
{code}
+This parameter overrides `preferLocalShards=true`. Both can be defined, but 
replicas of preferred types are always selected over local shards.
{code}

{{preferReplicaTypes overrides preferLocalShards}} vs. {{preferLocalShards 
overrides preferReplicaTypes}} - what might the use cases in either scenario 
be? Or perhaps the two parameters could be mutually exclusive for less 
potential user confusion and a simpler implementation. What do you think?

Specific patch feedback from taking only a quick look: there seem to be quite a 
few whitespace formatting changes, which makes it tricky to 'see' the actual 
changes. Steps along the following lines could be one way to try to undo those 
whitespace reformats:
{code}
git checkout -b master-solr-11982 -t origin/master
git apply SOLR-11982.patch
git diff -w > temp.patch
git checkout HEAD --
git apply --ignore-whitespace temp.patch
{code}

> Add support for preferReplicaTypes parameter
> 
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0), 7.3
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily prefer certain replica 
> types in a similar fashion to preferLocalShards. I'll be coming up with a 
> patch that allows one to specify e.g. preferReplicaTypes=PULL,TLOG which 
> would mean that NRT replicas wouldn't be hit with queries unless they're the 
> only ones available.






[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-20 Thread Sachin Goyal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370472#comment-16370472
 ] 

Sachin Goyal commented on SOLR-11078:
-

Thanks [~jpountz] for a good, detailed reply.

So it seems that point fields are great for range searches and for 
single-field queries that match few documents.

What about faceting/pivoting, and what about sorting? If there are any 
insights into their performance with point fields, that would be great to know 
as well.

Also, do we know Lucene's response to this performance hit? If Solr is just 
catching up to Lucene's deprecation/adoption of field types, then I would 
imagine the Lucene folks already know about this and may have a fix or a 
recommendation.


> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> performance has dropped since Solr 6.4.2. We have two indices per server, 
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in 
> performance.
> I am not sure if this is perhaps related to metrics collection or other 
> underlying changes. I am not sure if other high-transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!






[jira] [Reopened] (SOLR-11912) TriggerIntegrationTest fails a lot, reproducibly

2018-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reopened SOLR-11912:
--

This test keeps failing very frequently in Jenkins

> TriggerIntegrationTest fails a lot, reproducibly
> 
>
> Key: SOLR-11912
> URL: https://issues.apache.org/jira/browse/SOLR-11912
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> Multiple tests in this suite are not just flaky, but are failing reproducibly.
> From Hoss's report for the last 24 hours 
> [http://fucit.org/solr-jenkins-reports/reports/24hours-method-failures.csv]:
> {noformat}
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testCooldown,thetaphi/Lucene-Solr-master-Linux/21346/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testEventFromRestoredState,apache/Lucene-Solr-NightlyTests-7.x/131/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testEventFromRestoredState,sarowe/Lucene-Solr-tests-master/14874/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testEventFromRestoredState,thetaphi/Lucene-Solr-7.x-Solaris/412/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testEventFromRestoredState,thetaphi/Lucene-Solr-master-MacOSX/4408/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testListeners,thetaphi/Lucene-Solr-master-Windows/7140/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,apache/Lucene-Solr-Tests-7.x/334/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,sarowe/Lucene-Solr-tests-7.x/2526/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,thetaphi/Lucene-Solr-7.x-Linux/1243/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,thetaphi/Lucene-Solr-7.x-Windows/424/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,thetaphi/Lucene-Solr-master-Linux/21344/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,thetaphi/Lucene-Solr-master-Linux/21345/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testMetricTrigger,thetaphi/Lucene-Solr-master-Linux/21350/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeAddedTrigger,thetaphi/Lucene-Solr-master-Windows/7139/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeAddedTriggerRestoreState,apache/Lucene-Solr-Tests-7.x/334/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeAddedTriggerRestoreState,thetaphi/Lucene-Solr-7.x-Solaris/412/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeAddedTriggerRestoreState,thetaphi/Lucene-Solr-master-MacOSX/4408/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeLostTrigger,thetaphi/Lucene-Solr-7.x-Solaris/413/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeLostTrigger,thetaphi/Lucene-Solr-master-Linux/21351/
> org.apache.solr.cloud.autoscaling.TriggerIntegrationTest,testNodeLostTriggerRestoreState,thetaphi/Lucene-Solr-master-MacOSX/440
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2353 - Still Failing

2018-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2353/

5 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([7C0C821FF129BF1A:C6DEED677207510F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin(TestContentStreamDataSource.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expe

[jira] [Resolved] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8179.
---
Resolution: Not A Problem

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 466 - Still Unstable!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/466/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C48EC95125C7DC01:4CDAF68B8B3BB1F9]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C48EC95125C7DC01:7E82FEDE7A2F0A4E]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.j

[jira] [Commented] (LUCENE-8180) Explore using (Future)Arrays.mismatch for FixedBitSet.nextSetBit

2018-02-20 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370402#comment-16370402
 ] 

Adrien Grand commented on LUCENE-8180:
--

I was mostly thinking about the moderately sparse case (e.g. about 1/30th of 
bits set) on a large index with matches that are not uniformly spread across 
the doc ID space. I can't tell how common it is, but I wouldn't be surprised 
if it were fairly common, and in that case there could be some long runs of 
zeros.
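The idea can be sketched in plain Java (a standalone illustrative model, not Lucene's actual FixedBitSet code — the class, method names, and fixed scratch size below are all assumptions): `Arrays.mismatch` against a shared all-zeros array finds the first non-zero word, which is where the next set bit must live.

```java
import java.util.Arrays;

public class NextSetBitSketch {
    // Shared all-zeros scratch array; must be at least as long as any
    // range we compare against (a hypothetical fixed size here).
    private static final long[] ZEROS = new long[1024];

    /** Returns the index of the first non-zero word at or after {@code from},
     *  or -1 if every remaining word is zero. */
    static int firstNonZeroWord(long[] words, int from) {
        int len = words.length - from;
        // Arrays.mismatch (JDK 9+) returns the relative index of the first
        // differing element, or -1 if the two ranges are equal.
        int mismatch = Arrays.mismatch(words, from, words.length, ZEROS, 0, len);
        return mismatch < 0 ? -1 : from + mismatch;
    }

    public static void main(String[] args) {
        long[] words = new long[64];          // 64 words = 4096 bits, all clear
        words[40] = 1L << 3;                  // set bit 2563 (word 40, bit 3)
        System.out.println(firstNonZeroWord(words, 0));   // 40
        System.out.println(firstNonZeroWord(words, 41));  // -1
    }
}
```

A long run of zero words becomes a single vectorizable comparison rather than a word-by-word loop, which is exactly the sparse scenario described above.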

> Explore using (Future)Arrays.mismatch for FixedBitSet.nextSetBit
> 
>
> Key: LUCENE-8180
> URL: https://issues.apache.org/jira/browse/LUCENE-8180
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Using Arrays.mismatch with a fixed-size array full of zeros might help find 
> the next long that is not 0 faster.






Re: lucene-solr:master: LUCENE-8153: Make impacts checks lighter by default.

2018-02-20 Thread Adrien Grand
Thanks Christine. I had dealt with it locally but forgot to commit the changes.
I'll fix it now.

Le mar. 20 févr. 2018 à 18:25, Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> a écrit :

> Hello.
>
> System.err.println("-doSlowChecks is deprecated, use -slow instead");
>
> seems to break precommit (at least for me) on the forbidden-apis check?
>
> The bot hasn't yet updated the LUCENE-8153 ticket itself with this commit,
> hence replying here instead of there.
>
> Christine
>
> - Original Message -
> From: dev@lucene.apache.org
> To: comm...@lucene.apache.org
> At: 02/20/18 16:14:23
>
> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master 291248c75 -> 317a2e0c3
>
>
> LUCENE-8153: Make impacts checks lighter by default.
>
> The new `-slow` switch makes checks more complete but also more heavy. This
> option also cross-checks term vectors.
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/317a2e0c
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/317a2e0c
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/317a2e0c
>
> Branch: refs/heads/master
> Commit: 317a2e0c3d16b9f8ea6ed1b1e4697c5cec51d05c
> Parents: 291248c
> Author: Adrien Grand 
> Authored: Tue Feb 20 15:55:58 2018 +0100
> Committer: Adrien Grand 
> Committed: Tue Feb 20 17:14:11 2018 +0100
>
> --
>  .../org/apache/lucene/index/CheckIndex.java | 244 ++-
>  .../apache/lucene/index/TestIndexWriter.java|   2 +-
>  .../lucene/store/BaseDirectoryWrapper.java  |   8 +-
>  .../java/org/apache/lucene/util/TestUtil.java   |  14 +-
>  4 files changed, 139 insertions(+), 129 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/317a2e0c/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
> --
> diff --git a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
> b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
> index 7dd1aa9..54a227c 100644
> --- a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
> +++ b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
> @@ -429,18 +429,17 @@ public final class CheckIndex implements Closeable {
>  IOUtils.close(writeLock);
>}
>
> -  private boolean crossCheckTermVectors;
> +  private boolean doSlowChecks;
>
> -  /** If true, term vectors are compared against postings to
> -   *  make sure they are the same.  This will likely
> +  /** If true, additional slow checks are performed.  This will likely
> *  drastically increase time it takes to run CheckIndex! */
> -  public void setCrossCheckTermVectors(boolean v) {
> -crossCheckTermVectors = v;
> +  public void setDoSlowChecks(boolean v) {
> +doSlowChecks = v;
>}
>
> -  /** See {@link #setCrossCheckTermVectors}. */
> -  public boolean getCrossCheckTermVectors() {
> -return crossCheckTermVectors;
> +  /** See {@link #setDoSlowChecks}. */
> +  public boolean doSlowChecks() {
> +return doSlowChecks;
>}
>
>private boolean failFast;
> @@ -745,13 +744,13 @@ public final class CheckIndex implements Closeable {
>segInfoStat.fieldNormStatus = testFieldNorms(reader,
> infoStream, failFast);
>
>// Test the Term Index
> -  segInfoStat.termIndexStatus = testPostings(reader, infoStream,
> verbose, failFast);
> +  segInfoStat.termIndexStatus = testPostings(reader, infoStream,
> verbose, doSlowChecks, failFast);
>
>// Test Stored Fields
>segInfoStat.storedFieldStatus = testStoredFields(reader,
> infoStream, failFast);
>
>// Test Term Vectors
> -  segInfoStat.termVectorStatus = testTermVectors(reader,
> infoStream, verbose, crossCheckTermVectors, failFast);
> +  segInfoStat.termVectorStatus = testTermVectors(reader,
> infoStream, verbose, doSlowChecks, failFast);
>
>// Test Docvalues
>segInfoStat.docValuesStatus = testDocValues(reader, infoStream,
> failFast);
> @@ -1210,7 +1209,7 @@ public final class CheckIndex implements Closeable {
> * checks Fields api is consistent with itself.
> * searcher is optional, to verify with queries. Can be null.
> */
> -  private static Status.TermIndexStatus checkFields(Fields fields, Bits
> liveDocs, int maxDoc, FieldInfos fieldInfos, boolean doPrint, boolean
> isVectors, PrintStream infoStream, boolean verbose) throws IOException {
> +  private static Status.TermIndexStatus checkFields(Fields fields, Bits
> liveDocs, int maxDoc, FieldInfos fieldInfos, boolean doPrint, boolean
> isVectors, PrintStream infoStream, boolean verbose, boolean doSlowChecks)
> throws IOException {
>  // TODO: we should probably return our own stats thing...?!
>

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21499 - Still Failing!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21499/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([CA0DDEF98B5F4496:4259E12325A3296E]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13164 lines...]
   [junit4] Suite: org.apache.solr.cloud.ReplaceNodeNoTargetTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeNoTargetTest_CA0DDEF98B5F4496-001/init-core-data-001
   [junit4]

silent "ASF subversion and git services" bot?

2018-02-20 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Has anyone else also noticed or already followed up (with INFRA?) on the lack 
of "ASF subversion and git services" updates on the LUCENE and SOLR tickets?

This one seems to be the most recent one.

Christine

- Original Message -
From: dev@lucene.apache.org
To: dev@lucene.apache.org
At: 02/16/18 15:47:09


[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367489#comment-16367489
 ] 

ASF subversion and git services commented on LUCENE-8106:
-

Commit 1a6d896dfc1bdafff3067a513bade205f6e1ad11 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a6d896 ]

LUCENE-8106: always fast-forward merge after checkout


> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}







[jira] [Commented] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Joanita Dsouza (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370341#comment-16370341
 ] 

Joanita Dsouza commented on LUCENE-8179:


Thanks [~romseygeek]. I think I need to change our stop words list :)

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






Re:lucene-solr:master: LUCENE-8153: Make impacts checks lighter by default.

2018-02-20 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hello.

System.err.println("-doSlowChecks is deprecated, use -slow instead");

seems to break precommit (at least for me) on the forbidden-apis check?
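For context, Lucene's precommit runs the forbidden-apis checker, which rejects direct `System.out`/`System.err` use via signature rules along these lines (an illustrative fragment assuming the standard forbidden-apis signatures syntax, not the project's exact file):

```
# forbidden-apis signatures file (illustrative)
java.lang.System#out @ Use the infoStream / a logger instead of System.out
java.lang.System#err @ Use the infoStream / a logger instead of System.err
```

Code that legitimately needs console output typically carries a `@SuppressForbidden` annotation instead of bypassing the check.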

The bot hasn't yet updated the LUCENE-8153 ticket itself with this commit, 
hence replying here instead of there.

Christine

- Original Message -
From: dev@lucene.apache.org
To: comm...@lucene.apache.org
At: 02/20/18 16:14:23

Repository: lucene-solr
Updated Branches:
  refs/heads/master 291248c75 -> 317a2e0c3


LUCENE-8153: Make impacts checks lighter by default.

The new `-slow` switch makes checks more complete but also more heavy. This
option also cross-checks term vectors.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/317a2e0c
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/317a2e0c
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/317a2e0c

Branch: refs/heads/master
Commit: 317a2e0c3d16b9f8ea6ed1b1e4697c5cec51d05c
Parents: 291248c
Author: Adrien Grand 
Authored: Tue Feb 20 15:55:58 2018 +0100
Committer: Adrien Grand 
Committed: Tue Feb 20 17:14:11 2018 +0100

--
 .../org/apache/lucene/index/CheckIndex.java | 244 ++-
 .../apache/lucene/index/TestIndexWriter.java|   2 +-
 .../lucene/store/BaseDirectoryWrapper.java  |   8 +-
 .../java/org/apache/lucene/util/TestUtil.java   |  14 +-
 4 files changed, 139 insertions(+), 129 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/317a2e0c/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
--
diff --git a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java 
b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
index 7dd1aa9..54a227c 100644
--- a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
+++ b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
@@ -429,18 +429,17 @@ public final class CheckIndex implements Closeable {
 IOUtils.close(writeLock);
   }
 
-  private boolean crossCheckTermVectors;
+  private boolean doSlowChecks;
 
-  /** If true, term vectors are compared against postings to
-   *  make sure they are the same.  This will likely
+  /** If true, additional slow checks are performed.  This will likely
*  drastically increase time it takes to run CheckIndex! */
-  public void setCrossCheckTermVectors(boolean v) {
-crossCheckTermVectors = v;
+  public void setDoSlowChecks(boolean v) {
+doSlowChecks = v;
   }
 
-  /** See {@link #setCrossCheckTermVectors}. */
-  public boolean getCrossCheckTermVectors() {
-return crossCheckTermVectors;
+  /** See {@link #setDoSlowChecks}. */
+  public boolean doSlowChecks() {
+return doSlowChecks;
   }
 
   private boolean failFast;
@@ -745,13 +744,13 @@ public final class CheckIndex implements Closeable {
   segInfoStat.fieldNormStatus = testFieldNorms(reader, infoStream, 
failFast);
 
   // Test the Term Index
-  segInfoStat.termIndexStatus = testPostings(reader, infoStream, 
verbose, failFast);
+  segInfoStat.termIndexStatus = testPostings(reader, infoStream, 
verbose, doSlowChecks, failFast);
 
   // Test Stored Fields
   segInfoStat.storedFieldStatus = testStoredFields(reader, infoStream, 
failFast);
 
   // Test Term Vectors
-  segInfoStat.termVectorStatus = testTermVectors(reader, infoStream, 
verbose, crossCheckTermVectors, failFast);
+  segInfoStat.termVectorStatus = testTermVectors(reader, infoStream, 
verbose, doSlowChecks, failFast);
 
   // Test Docvalues
   segInfoStat.docValuesStatus = testDocValues(reader, infoStream, 
failFast);
@@ -1210,7 +1209,7 @@ public final class CheckIndex implements Closeable {
* checks Fields api is consistent with itself.
* searcher is optional, to verify with queries. Can be null.
*/
-  private static Status.TermIndexStatus checkFields(Fields fields, Bits 
liveDocs, int maxDoc, FieldInfos fieldInfos, boolean doPrint, boolean 
isVectors, PrintStream infoStream, boolean verbose) throws IOException {
+  private static Status.TermIndexStatus checkFields(Fields fields, Bits 
liveDocs, int maxDoc, FieldInfos fieldInfos, boolean doPrint, boolean 
isVectors, PrintStream infoStream, boolean verbose, boolean doSlowChecks) 
throws IOException {
 // TODO: we should probably return our own stats thing...?!
 long startNS;
 if (doPrint) {
@@ -1600,104 +1599,112 @@ public final class CheckIndex implements Closeable {
   }
 }
 
-// Test score blocks
-// We only score on freq to keep things simple and not pull norms
-SimScorer scorer = new SimScorer(field) {
-  @Override
-  public float score(float freq, long norm) {
-retu

[jira] [Commented] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370277#comment-16370277
 ] 

Alan Woodward commented on LUCENE-8179:
---

From a quick look, it seems that all the terms in your example sentence are 
stop words, so the resulting TokenStream is empty.
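The effect is easy to model in plain Java (a toy sketch, deliberately not using Lucene's StandardTokenizer/StopFilter classes — the regex-split "tokenizer" below is an assumption made for illustration): once 'system' is in the stop set, text made entirely of stop words yields no tokens, while 'systems' passes through.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StopWordDemo {
    /** Toy analyzer: lowercase, split on non-word chars, drop stop words. */
    static List<String> analyze(String text, Set<String> stopWords) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(t -> !t.isEmpty())
                .filter(t -> !stopWords.contains(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<String> stop = Set.of("the", "a", "system");
        System.out.println(analyze("the system", stop));   // [] - every term is a stop word
        System.out.println(analyze("the systems", stop));  // [systems]
    }
}
```

So the tokenizer itself handles 'system' fine; it is the stop filter downstream that removes it, which is why the while(ts.incrementToken()) loop is never entered.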

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[jira] [Commented] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Joanita Dsouza (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370266#comment-16370266
 ] 

Joanita Dsouza commented on LUCENE-8179:


Actually, we use a custom analyzer which uses a stop filter with a list of stop 
words. This list contains 'system'.

When I run the program in the microservice, it doesn't go into the 
while(ts.incrementToken()) loop. But when the text has the plural word 
'systems' it goes into the loop and creates the terms just fine.

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard Tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[jira] [Comment Edited] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Joanita Dsouza (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370266#comment-16370266
 ] 

Joanita Dsouza edited comment on LUCENE-8179 at 2/20/18 5:02 PM:
-

[~romseygeek], actually we use a custom analyzer which uses a stop filter 
with a list of stop words. This list contains 'system'.

When I run the program in the microservice, it doesn't go into the 
while(ts.incrementToken()) loop. But when the text has the plural word 
'systems', it goes into the loop and creates the terms just fine.


was (Author: joanitad):
Actually, we use a custom analyzer which uses a stop filter with a list of stop 
words.This list contains 'system'.

WhenI run the program in the microservice, it doesn't go into the 
while(ts.incrementToken()) loop. But when the text has the plural word 
'systems' it goes in the loop and creates the terms just fine.

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard Tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[jira] [Updated] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-20 Thread Joanita Dsouza (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joanita Dsouza updated LUCENE-8179:
---
Attachment: TokenizerBugRevised.java

> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java, TokenizerBugRevised.java
>
>
> Hi,
> We use the Standard Tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system'. Attached is a 
> small program to demo this.
> Is this a known issue? Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7182 - Failure!

2018-02-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7182/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_5AEC66B73E74E4B-001\testSeekSliceZero-025:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_5AEC66B73E74E4B-001\testSeekSliceZero-025
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_5AEC66B73E74E4B-001\testSeekSliceZero-025:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_5AEC66B73E74E4B-001\testSeekSliceZero-025

at __randomizedtesting.SeedInfo.seed([5AEC66B73E74E4B]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data\version-2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data\version-2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data\version-2\log.1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.NoFacetTest_1C1E3227DE73D9B0-001\tempDir-001\zookeeper\server1\data\version-

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370248#comment-16370248
 ] 

Mikhail Khludnev commented on SOLR-9510:


hey, yo.. it works almost without changes to the existing code:
{code}
  "q", "{!parent tag=top filters=$child.fq which=type_s:book v=$childquery}"
, "childquery", "comment_t:*"
, "child.fq", "{!tag=author}author_s:dan"
, "child.fq", "{!tag=stars}stars_i:4"
, "fq", "type_s:book"
, "fl", "id", "fl", "title_t"
, "json.facet", "{" +
    "  comments_for_author: {" +
    "    domain: { excludeTags:\"top\"," +           // 1. kick away the top bjq, while still applying parent-level fqs
    "      blockChildren: \"type_s:book\", " +       // 2. get all children of the enlarged parent set
    "      filter:[\"{!filters params=$child.fq " +  // 3. filter the children with the author filter excluded
    "        excludeTags=author v=$childquery}\"]" +
    "    }," +
    "    type:terms," +
    "    field:author_s," +
    "    facet: {" +
    "      in_books: \"unique(_root_)\" }" +
    "  }" +
{code}
patch is coming
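
To see how the pieces fit together on the wire, here is a hedged sketch in 
Python that assembles the same request as HTTP parameters. The parameter names 
mirror the snippet above; the Solr host and collection in the usage note are 
made up for illustration, and urlencode stands in for whatever client you use.

```python
# Sketch: building the block-join query with tagged child filters as
# repeated HTTP parameters, so the excludeTags references can resolve.
from urllib.parse import urlencode

params = [
    ("q", "{!parent tag=top filters=$child.fq which=type_s:book v=$childquery}"),
    ("childquery", "comment_t:*"),
    ("child.fq", "{!tag=author}author_s:dan"),  # tagged so it can be excluded
    ("child.fq", "{!tag=stars}stars_i:4"),
    ("fq", "type_s:book"),
    ("fl", "id"), ("fl", "title_t"),
    ("json.facet",
     '{comments_for_author:{'
     'domain:{excludeTags:"top",blockChildren:"type_s:book",'
     'filter:["{!filters params=$child.fq excludeTags=author v=$childquery}"]},'
     'type:terms,field:author_s,'
     'facet:{in_books:"unique(_root_)"}}}'),
]

query_string = urlencode(params)
# Repeated keys survive encoding, so both tagged child filters are sent:
print(query_string.count("child.fq="))  # -> 2
```

A client would then issue something like GET 
http://localhost:8983/solr/books/select?{query_string} (hypothetical host and 
collection name).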


> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, an alternative approach might be to move this into 
> the {{domain:\{..}}} instruction of JSON Facets. From the implementation 
> perspective, it's desirable to optimize with bitset processing, however I 
> suppose that might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370239#comment-16370239
 ] 

Shawn Heisey commented on SOLR-11078:
-

If there are things we can do in Solr to improve the situation, we should 
absolutely be doing them.  I personally do not know enough about Lucene code 
writing to do anything.  If I did know enough, I would be doing everything I 
could to improve it.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and have noticed that 
> performance has dropped since Solr 6.4.2. We have two indices per server, 
> "searchsuggestions" and "tradesearch", and the drop is noticeable on both.
> I am not sure if this is perhaps related to metric collation or other 
> underlying changes. I am not sure if other high transaction users have 
> noticed similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!






[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-20 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370228#comment-16370228
 ] 

Minoru Osuka commented on SOLR-11795:
-

I'm sorry, my patch file had some mistakes:

- avoid an invalid logging pattern.
- add the jar checksum file to solr/licenses.
- fix a duplicate section name in the Ref Guide.

I attached a new patch (SOLR-11795-8.patch) that fixes them.

I apologize for the trouble.

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795-8.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I 'd like to monitor Solr using Prometheus and Grafana.
> I've already created Solr metrics exporter for Prometheus. I'd like to 
> contribute to contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370226#comment-16370226
 ] 

Steve Rowe commented on LUCENE-8106:


bq.  git.exe is undefined everywhere - except the master Jenkins (Linux), which 
has GIT, but no other slave machine has GIT installed. They only use Jenkins' 
internal JGit client, no command line. There is no need to have Git for running 
Lucene builds (except packaging and jar version numbers, but Policeman does not 
use this - because it's optional for test builds).

{{reproduceJenkinsFailures.py}} modifies the checked out revision in three 
different ways:

# Checks out the revision at which the original failure(s) occurred (phase: 
initial setup)
# Checks out the tip of the branch on which the original failure occurred 
(phase: If any test reproduces 100%)
# Checks out the original workspace revision (phase: cleanup)

What do you think ought to be done on OS VMs without git?
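
For context, the log-parsing step that precedes those checkouts can be sketched 
like this in Python. The regexes and the "module: ... test: ..." line format 
below are invented for illustration; the real script parses Jenkins "reproduce 
with" lines and differs in detail.

```python
# Sketch of the first phase: pull the git revision and the failed test
# classes out of a Jenkins console log, then group tests by module so
# each module gets a single 'ant test' invocation.
import re
from collections import defaultdict

def parse_log(log_text):
    """Return (revision, {module: set of failed test classes})."""
    rev_match = re.search(r"Checking out Revision ([0-9a-f]{7,40})", log_text)
    revision = rev_match.group(1) if rev_match else None
    by_module = defaultdict(set)
    # Invented line format standing in for the real 'reproduce with' lines:
    for module, test in re.findall(r"module: (\S+)\s+test: (\S+)", log_text):
        by_module[module].add(test)
    return revision, dict(by_module)

sample = """Checking out Revision 5aec66b73e74e4b0
module: lucene/core  test: TestMultiMMap
module: lucene/core  test: TestIndexWriter
module: solr/core    test: TestRecovery"""
rev, groups = parse_log(sample)
print(rev, groups)
```

Grouping by module is what lets the script fail fast per module instead of 
re-running the whole suite.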

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}






[jira] [Updated] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-20 Thread Minoru Osuka (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Minoru Osuka updated SOLR-11795:

Attachment: SOLR-11795-8.patch

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795-8.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I 'd like to monitor Solr using Prometheus and Grafana.
> I've already created Solr metrics exporter for Prometheus. I'd like to 
> contribute to contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370215#comment-16370215
 ] 

Steve Rowe commented on LUCENE-8106:


bq. On Linux, they don't have entries for those tools, so it falls back to 
build.xml's default behaviour. git.exe is undefined everywhere - except the 
master Jenkins (Linux), which has GIT, but no other slave machine has GIT 
installed.

Thanks, I've tried to capture that in a modified version of the script now on 
{{Lucene-Solr-master-Linux}}, we'll see how it goes next time it runs:

{noformat}
set -x # Log commands

TMPFILE=`mktemp`
trap "rm -f $TMPFILE" EXIT   # Delete the temp file on exit

curl -o $TMPFILE https://jenkins.thetaphi.de/job/$JOB_NAME/$BUILD_NUMBER/consoleText

if grep --quiet 'reproduce with' $TMPFILE ; then

    # Preserve original build output
    mv lucene/build lucene/build.orig
    mv solr/build solr/build.orig

    PYTHON32_EXE=`grep "^[[:space:]]*python32\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
    [ -z "$PYTHON32_EXE" ] && PYTHON32_EXE=python3
    GIT_EXE=`grep "^[[:space:]]*git\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
    [ -n "$GIT_EXE" ] && export PATH=$GIT_EXE:$PATH
    $PYTHON32_EXE -u dev-tools/scripts/reproduceJenkinsFailures.py --no-fetch file://$TMPFILE

    # Preserve repro build output
    mv lucene/build lucene/build.repro
    mv solr/build solr/build.repro

    # Restore original build output
    mv lucene/build.orig lucene/build
    mv solr/build.orig solr/build
fi
{noformat}

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}





