[jira] [Commented] (SOLR-10887) Add .xml extension to managed-schema

2017-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16050032#comment-16050032
 ] 

David Smiley commented on SOLR-10887:
-------------------------------------

+1 !  This has been an annoyance of mine.

[~arafalov] had some fantastic comments on this in another issue that I will 
quote:
bq. On xml extension for managed-schema. Not having an XML extension means that 
file is a special case everywhere. Admin UI had several JIRAs because that file 
would not display properly, file system sorting is confusing, presentations 
that try to explain things are confusing. Even just viewing the example schemas 
on filesystem is confusing as there is no data-type mapping and editors do not 
open it up without explicit intervention.

> Add .xml extension to managed-schema
> ------------------------------------
>
> Key: SOLR-10887
> URL: https://issues.apache.org/jira/browse/SOLR-10887
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
>
> Discussions here SOLR-10574.
> There is consensus to renaming managed-schema back to managed-schema.xml. 
> Requires backcompat handling as mentioned in Yonik's comment:
> {code}
> there is back compat to consider. I'd also prefer that if it get changed, we 
> first look for "managed-schema.xml", then "managed-schema", and then 
> "schema.xml" to preserve back compat.
> {code}
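Yonik's proposed fallback order can be sketched as a small resolver. This is an illustrative sketch only; the class and method names are hypothetical, not Solr's actual schema-loading code -- the only thing taken from the issue is the lookup order.

```java
import java.util.List;
import java.util.Set;

/** Illustrative sketch of the back-compat lookup order; not Solr's real loader. */
public class SchemaNameResolver {

    // Prefer the new name, then fall back to the legacy names.
    static final List<String> CANDIDATES =
            List.of("managed-schema.xml", "managed-schema", "schema.xml");

    /** Returns the first candidate present in the config set, or null if none is. */
    static String resolve(Set<String> filesInConfigSet) {
        for (String name : CANDIDATES) {
            if (filesInConfigSet.contains(name)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // An upgraded config set that still uses the legacy name:
        System.out.println(resolve(Set.of("managed-schema", "solrconfig.xml")));
        // prints: managed-schema
    }
}
```

The point of the ordering is that an existing config set keeps working untouched, while a freshly written schema gets the new `.xml` name.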



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16050029#comment-16050029
 ] 

David Smiley commented on SOLR-10574:
-------------------------------------

[~arafalov] to your proposal 5 days ago: it makes sense to me, but I think the 
file extension of the schema is a distraction from the issue we're discussing 
here.  It's difficult to stay on-topic; credit to [~ichattopadhyaya] for 
deflecting it to SOLR-10887.

+0 to Ishan's latest patch... with a heavy sigh I see data-driven on by default 
and I'm going to have to start memorizing how to disable the darned thing.  
Commit away.  Hopefully warnings etc. can be added still?  Another issue?  I 
don't want us all to collectively feel the need to warn users (on solr-user or 
IRC or wherever) when they hit a problem related to data driven when Solr 
itself can warn them against this setting.

> Choose a default configset for Solr 7
> -------------------------------------
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch, 
> SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents
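The disable step above can be sketched as a tiny client helper. This is a hypothetical sketch mirroring the curl command: the class and method names are illustrative, and only the `/config` endpoint and the `update.autoCreateFields` user property come from the issue.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Hypothetical helper equivalent to the curl command; names are illustrative. */
public class DisableSchemaless {

    /** Config-API body that turns off automatic field creation. */
    static String disableAutoCreateFieldsBody() {
        return "{\"set-user-property\": {\"update.autoCreateFields\": \"false\"}}";
    }

    /** POSTs the body to a collection's /config endpoint, returning the HTTP status. */
    static int post(String configUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(configUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(disableAutoCreateFieldsBody().getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        // e.g. post("http://localhost:8983/solr/coll1/config");
        System.out.println(disableAutoCreateFieldsBody());
    }
}
```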






[jira] [Resolved] (LUCENE-7500) Remove LeafReader.fields()

2017-06-14 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-7500.
----------------------------------
Resolution: Fixed

Thanks for the code review [~mikemccand] and for your input earlier [~jpountz].

> Remove LeafReader.fields()
> --------------------------
>
> Key: LUCENE-7500
> URL: https://issues.apache.org/jira/browse/LUCENE-7500
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
> Attachments: LUCENE_7500_avoid_leafReader_fields.patch, 
> LUCENE_7500_avoid_leafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch
>
>
> {{Fields}} seems like a pointless intermediary between the {{LeafReader}} and 
> {{Terms}}. Why not have {{LeafReader.getTerms(fieldName)}} instead? One loses 
> the ability to get the count and iterate over indexed fields, but it's not 
> clear what real use-cases are for that and such rare needs could figure that 
> out with FieldInfos.
> [~mikemccand] pointed out that we'd probably need to re-introduce a 
> {{TermVectors}} class since TV's are row-oriented not column-oriented.  IMO 
> they should be column-oriented but that'd be a separate issue.
> _(p.s. I'm lacking time to do this w/i the next couple months so if someone 
> else wants to tackle it then great)_
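The shape of the change can be illustrated with a toy model: before, callers reached terms through an intermediate Fields object; after, the reader exposes terms directly. The interfaces below mirror Lucene's names but are simplified stand-ins, not the real classes.

```java
import java.util.Map;

/** Toy model of the LUCENE-7500 API change; illustrative stand-ins only. */
public class FieldsRemovalSketch {
    interface Terms { long size(); }

    // Before: reader.fields().terms(field) -- Fields is a pointless intermediary.
    interface Fields { Terms terms(String field); }
    interface OldLeafReader { Fields fields(); }

    // After: reader.terms(field) -- the reader exposes terms directly.
    interface NewLeafReader { Terms terms(String field); }

    /** Adapts an old-style reader to the new direct accessor. */
    static NewLeafReader adapt(OldLeafReader reader) {
        return field -> reader.fields().terms(field);
    }

    public static void main(String[] args) {
        Terms titleTerms = () -> 42L;
        Map<String, Terms> index = Map.of("title", titleTerms);
        OldLeafReader reader = () -> index::get;
        NewLeafReader direct = adapt(reader);
        System.out.println(direct.terms("title").size()); // 42
    }
}
```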






[jira] [Commented] (LUCENE-7500) Remove LeafReader.fields()

2017-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049991#comment-16049991
 ] 

ASF subversion and git services commented on LUCENE-7500:
---------------------------------------------------------

Commit abc393dbfdb805361747ef651393332968851f3d in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=abc393d ]

LUCENE-7500: Remove LeafReader.fields in lieu of LeafReader.terms.
Optimized MultiFields.getTerms.


> Remove LeafReader.fields()
> --------------------------
>
> Key: LUCENE-7500
> URL: https://issues.apache.org/jira/browse/LUCENE-7500
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
> Attachments: LUCENE_7500_avoid_leafReader_fields.patch, 
> LUCENE_7500_avoid_leafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch, 
> LUCENE_7500_Remove_LeafReader_fields.patch
>
>
> {{Fields}} seems like a pointless intermediary between the {{LeafReader}} and 
> {{Terms}}. Why not have {{LeafReader.getTerms(fieldName)}} instead? One loses 
> the ability to get the count and iterate over indexed fields, but it's not 
> clear what real use-cases are for that and such rare needs could figure that 
> out with FieldInfos.
> [~mikemccand] pointed out that we'd probably need to re-introduce a 
> {{TermVectors}} class since TV's are row-oriented not column-oriented.  IMO 
> they should be column-oriented but that'd be a separate issue.
> _(p.s. I'm lacking time to do this w/i the next couple months so if someone 
> else wants to tackle it then great)_






[jira] [Commented] (SOLR-7878) Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric fields)

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049982#comment-16049982
 ] 

Hoss Man commented on SOLR-7878:


Again: I'm not that familiar with the code (and again: I feel like these 
questions/conversations belong in SOLR-9989 since they're specific to JSON 
Faceting), so if you think a wrapper that emulates SortedSetDocValues is the 
best approach then go for it -- I was just surprised, given that it seems like 
it would involve a lot of redundant LegacyNumeric conversion.

But on the flip side: if it seems like the most straightforward approach, then 
we can always go with that for now and try to optimize later.
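The wrapper idea can be sketched in miniature: derive a globally sorted ord space (the SortedSetDocValues view) from per-document numeric values (the SortedNumericDocValues view, docId -> values). This is a toy model with simplified stand-in structures, not Lucene's actual doc-values classes.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.LongStream;

/** Toy sketch of emulating SortedSet-style ordinals over per-doc numerics. */
public class NumericToOrdsSketch {
    final long[] ordToValue;              // ord -> value, globally sorted
    final Map<Integer, int[]> docToOrds;  // doc -> ords (sorted, deduped)

    NumericToOrdsSketch(Map<Integer, long[]> docToValues) {
        // Global ord space: distinct values across all docs, in sorted order.
        ordToValue = docToValues.values().stream()
                .flatMapToLong(LongStream::of).distinct().sorted().toArray();
        // Map each doc's values onto ords via binary search.
        docToOrds = new HashMap<>();
        for (Map.Entry<Integer, long[]> e : docToValues.entrySet()) {
            int[] ords = LongStream.of(e.getValue()).distinct()
                    .mapToInt(v -> Arrays.binarySearch(ordToValue, v))
                    .sorted().toArray();
            docToOrds.put(e.getKey(), ords);
        }
    }

    public static void main(String[] args) {
        Map<Integer, long[]> dv = Map.of(0, new long[]{7, 3}, 1, new long[]{3, 11});
        NumericToOrdsSketch wrapper = new NumericToOrdsSketch(dv);
        System.out.println(Arrays.toString(wrapper.ordToValue));       // [3, 7, 11]
        System.out.println(Arrays.toString(wrapper.docToOrds.get(0))); // [0, 1]
    }
}
```

Building this mapping is exactly the extra pass (and the redundant conversion) the comment worries about, which is why it may only make sense as a stop-gap.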

> Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric 
> fields)
> --------------------------------------------------------------------------
>
> Key: SOLR-7878
> URL: https://issues.apache.org/jira/browse/SOLR-7878
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: David Smiley
>
> Lucene has a SortedNumericDocValues (i.e. multi-valued numeric DocValues), 
> ever since late in the 4x versions.  Solr's TrieField.createFields 
> unfortunately still uses SortedSetDocValues for the multi-valued case.  
> SortedNumericDocValues is more efficient than SortedSetDocValues; for 
> example, no 'ordinal' mapping is needed for sorting/faceting.  
> Unfortunately, updating Solr here would be quite a bit of work, since there 
> are backwards-compatibility concerns, and faceting code would need a new code 
> path implementation just for this.  Sorting is relatively simple thanks to 
> SortedNumericSortField, and today multi-valued sorting isn't directly 
> possible.






[JENKINS-EA] Lucene-Solr-master-Windows (32bit/jdk-9-ea+173) - Build # 6649 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6649/
Java: 32bit/jdk-9-ea+173 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([87D489100C013851:5F99A447FBDC9DF1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1864 - Unstable

2017-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1864/

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([C3406D32319C9C27:1B0D4065C6413987]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-7878) Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric fields)

2017-06-14 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049925#comment-16049925
 ] 

Cao Manh Dat edited comment on SOLR-7878 at 6/15/17 3:23 AM:
-------------------------------------------------------------

[~hossman] I am thinking about that direction too. But "terms" in JSON facet 
is closely tied to TermsEnum (the values must be globally sorted), right? 
SortedNumericDV, on the other hand, is a mapping from docId -> list of values; 
it does not have the reverse direction.


was (Author: caomanhdat):
[~hossman] I am thinking about that direction too. But "terms" in JSON facet 
is closely tied to TermsEnum, right? SortedNumericDV, on the other hand, is a 
mapping from docId -> list of values; it does not have the reverse direction.

> Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric 
> fields)
> --------------------------------------------------------------------------
>
> Key: SOLR-7878
> URL: https://issues.apache.org/jira/browse/SOLR-7878
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: David Smiley
>
> Lucene has a SortedNumericDocValues (i.e. multi-valued numeric DocValues), 
> ever since late in the 4x versions.  Solr's TrieField.createFields 
> unfortunately still uses SortedSetDocValues for the multi-valued case.  
> SortedNumericDocValues is more efficient than SortedSetDocValues; for 
> example, no 'ordinal' mapping is needed for sorting/faceting.  
> Unfortunately, updating Solr here would be quite a bit of work, since there 
> are backwards-compatibility concerns, and faceting code would need a new code 
> path implementation just for this.  Sorting is relatively simple thanks to 
> SortedNumericSortField, and today multi-valued sorting isn't directly 
> possible.






[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049926#comment-16049926
 ] 

Ishan Chattopadhyaya commented on SOLR-10574:
---------------------------------------------

Alexandre, Erick, should we spin off the edismax based searching on all fields 
in a separate issue and tackle as a follow up to this issue (after the patch 
here is committed)?

> Choose a default configset for Solr 7
> -------------------------------------
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch, 
> SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






[jira] [Commented] (SOLR-7878) Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric fields)

2017-06-14 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049925#comment-16049925
 ] 

Cao Manh Dat commented on SOLR-7878:


[~hossman] I am thinking about that direction too. But "terms" in JSON facet 
is closely tied to TermsEnum, right? SortedNumericDV, on the other hand, is a 
mapping from docId -> list of values; it does not have the reverse direction.

> Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric 
> fields)
> --------------------------------------------------------------------------
>
> Key: SOLR-7878
> URL: https://issues.apache.org/jira/browse/SOLR-7878
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: David Smiley
>
> Lucene has a SortedNumericDocValues (i.e. multi-valued numeric DocValues), 
> ever since late in the 4x versions.  Solr's TrieField.createFields 
> unfortunately still uses SortedSetDocValues for the multi-valued case.  
> SortedNumericDocValues is more efficient than SortedSetDocValues; for 
> example, no 'ordinal' mapping is needed for sorting/faceting.  
> Unfortunately, updating Solr here would be quite a bit of work, since there 
> are backwards-compatibility concerns, and faceting code would need a new code 
> path implementation just for this.  Sorting is relatively simple thanks to 
> SortedNumericSortField, and today multi-valued sorting isn't directly 
> possible.






[jira] [Updated] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10574:

Attachment: SOLR-10574.patch

Adding patch for the proposed changes (for easy reviewing, adding on top of 
basic_confs).
I'm also working on a committable patch (that contains the script changes for 
this). That patch would remove both data_driven_schema_configs and basic_confs.
As suggested by David, I'll try to commit it in two commits for better history 
reviewing.

Note: this patch does not contain the managed-schema -> managed-schema.xml 
changes. It can be dealt with separately in SOLR-10887.

> Choose a default configset for Solr 7
> -------------------------------------
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch, 
> SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






[jira] [Resolved] (SOLR-10833) Numeric FooPointField classes inconsistent with TrieFooFields on malformed input: throw NumberFormatException

2017-06-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10833.
--
Resolution: Fixed
Fix Version/s: 6.7
               master (7.0)

Resolving this issue now. It can be backported to branch_6_6 if we decide to 
release a 6.6.1

> Numeric FooPointField classes inconsistent with TrieFooFields on malformed 
> input: throw NumberFormatException
> --------------------------------------------------------------------------
>
> Key: SOLR-10833
> URL: https://issues.apache.org/jira/browse/SOLR-10833
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Tomás Fernández Löbbe
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10833.patch, SOLR-10833.patch, SOLR-10833.patch, 
> SOLR-10833.patch
>
>
> Trie-based numeric fields deal with bad input by wrapping 
> NumberFormatExceptions in BAD_REQUEST SolrExceptions -- PointFields just 
> allow the NumberFormatExceptions to be thrown as-is, which means they 
> propagate up and are eventually treated as a SERVER_ERROR when responding to 
> clients.
> This is not only inconsistent from an end user perspective -- but also breaks 
> how these errors are handled in SolrCloud when the requests have been 
> forwarded/proxied.
> We should ensure that the FooPointField classes behave consistently with the 
> TrieFooField classes on bad input (both when adding a document and when 
> creating a field query)






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1368 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1368/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([F7F6570588590E18:6D022AE716C39224]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:890)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:883)
... 40 more


FAILED:  org.apache.solr.update.HardAutoCommitTest.testCommitWithin

Error Message:
Exception during 

[jira] [Commented] (SOLR-10760) Remove trie field types and fields from example schemas

2017-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049889#comment-16049889
 ] 

Steve Rowe commented on SOLR-10760:
---

bq. 1. Does PointType need Trie* fields?

No, PointType is tested as working with points by PolyFieldTest.  All PointType 
field types in the example schemas use the dynamic field suffix "_d" for their 
sub-fields, and "_d" maps to a DoublePointField in all example schemas.
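
For illustration, a schema fragment of the sort described above might look like the following. This is only a sketch: the type and field names here ({{pdouble}}, {{location_2d}}) are hypothetical and not copied from any shipped example schema.

```xml
<!-- Points-based double type backing the dynamic "_d" sub-fields -->
<fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
<dynamicField name="*_d" type="pdouble" indexed="true" stored="false"/>

<!-- PointType stores each dimension in a "*_d" sub-field -->
<fieldType name="location_2d" class="solr.PointType"
           dimension="2" subFieldSuffix="_d"/>
```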


> Remove trie field types and fields from example schemas
> ---
>
> Key: SOLR-10760
> URL: https://issues.apache.org/jira/browse/SOLR-10760
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10760.patch
>
>
> In order to make points fields the default, we should remove all trie field 
> types and fields from example schemas.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10760) Remove trie field types and fields from example schemas

2017-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048520#comment-16048520
 ] 

Steve Rowe edited comment on SOLR-10760 at 6/15/17 1:16 AM:


WIP patch.  No testing done yet, but except for the TODOs below, I think it's 
complete.

TODO:

# Does PointType need Trie* fields?
# CurrencyField uses a Trie* field, so this issue is blocked by SOLR-10503 
(I'll link it that way in a sec)
# Does BBoxField need Trie* fields?


was (Author: steve_rowe):
WIP patch.  No testing done yet, but except for the TODOs below, I think it's 
complete.

TODO:

# Does PointField need Trie* fields?
# CurrencyField uses a Trie* field, so this issue is blocked by SOLR-10503 
(I'll link it that way in a sec)
# Does BBoxField need Trie* fields?

> Remove trie field types and fields from example schemas
> ---
>
> Key: SOLR-10760
> URL: https://issues.apache.org/jira/browse/SOLR-10760
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10760.patch
>
>
> In order to make points fields the default, we should remove all trie field 
> types and fields from example schemas.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1325 - Still unstable

2017-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1325/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestTlogReplica

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, RawDirectoryWrapper, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:361)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:721)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:948)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:855)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:979)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:178)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:747)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:728)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:509)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:318)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1033)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:855)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:979)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:178)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:747)  
at 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 934 - Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/934/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

11 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Task 3001 did not complete, final state: FAILED expected same: was 
not:

Stack Trace:
java.lang.AssertionError: Task 3001 did not complete, final state: FAILED 
expected same: was not:
at 
__randomizedtesting.SeedInfo.seed([3FDC2CC0EBBFB023:B788131A4543DDDB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotSame(Assert.java:641)
at org.junit.Assert.assertSame(Assert.java:580)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testDeduplicationOfSubmittedTasks(MultiThreadedOCPTest.java:250)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049843#comment-16049843
 ] 

Robert Muir commented on LUCENE-7879:
-

Your build path is not configured correctly (which is what that command should 
fix). It ensures {{lucene/test-framework/src,properties,test}} is on the build 
path, which contains things the tests depend on.
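
For reference, the generated Eclipse configuration would include source entries along these lines. This is a sketch only; the exact paths and attributes in the generated {{.classpath}} may differ between Lucene versions.

```xml
<!-- Sketch: .classpath entries pulling test-framework sources
     onto the Eclipse build path -->
<classpathentry kind="src" path="lucene/test-framework/src/java"/>
<classpathentry kind="src" path="lucene/test-framework/src/resources"/>
<classpathentry kind="src" path="lucene/test-framework/src/test"/>
```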

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We built Lucene using Ant, opened the project in Eclipse, and ran JUnit tests. 
> Some tests succeed, but a lot of the tests report an exception. For 
> example, running the JUnit test on TestIndexWriter.java produces
> * "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
> does not exist.  You need to add the corresponding JAR file supporting this 
> SPI to your classpath.  The current classpath supports the following names: 
> [Lucene60, Lucene62, SimpleText, Lucene70]".*
>   
>  Does anyone have a good idea about why most of the unit tests are failing 
> for us in Eclipse? 
> The whole error message in console is: 
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> jun 14, 2017 6:59:22 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> ADVERTENCIA: Uncaught exception in thread: 
> Thread[Thread-102,5,TGRP-TestIndexWriter]
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
> need to add the corresponding JAR file supporting this SPI to your classpath. 
>  The current classpath supports the following names: [Lucene60, Lucene62, 
> SimpleText, Lucene70]
>   at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
>   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
>   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
>   at 
> org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> 

[jira] [Commented] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jintao Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049818#comment-16049818
 ] 

Jintao Jiang commented on LUCENE-7879:
--

Have the same issue here.

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We built Lucene using Ant, opened the project in Eclipse, and ran JUnit tests. 
> Some tests succeed, but a lot of the tests report an exception. For 
> example, running the JUnit test on TestIndexWriter.java produces
> * "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
> does not exist.  You need to add the corresponding JAR file supporting this 
> SPI to your classpath.  The current classpath supports the following names: 
> [Lucene60, Lucene62, SimpleText, Lucene70]".*
>   
>  Does anyone have a good idea about why most of the unit tests are failing 
> for us in Eclipse? 
> The whole error message in console is: 
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> jun 14, 2017 6:59:22 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> ADVERTENCIA: Uncaught exception in thread: 
> Thread[Thread-102,5,TGRP-TestIndexWriter]
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
> need to add the corresponding JAR file supporting this SPI to your classpath. 
>  The current classpath supports the following names: [Lucene60, Lucene62, 
> SimpleText, Lucene70]
>   at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
>   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
>   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
>   at 
> org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce 

[jira] [Reopened] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaewoo Kim reopened LUCENE-7879:


Still not working 

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We built Lucene using Ant, opened the project in Eclipse, and ran JUnit tests. 
> Some tests succeed, but a lot of the tests report an exception. For 
> example, running the JUnit test on TestIndexWriter.java produces
> * "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
> does not exist.  You need to add the corresponding JAR file supporting this 
> SPI to your classpath.  The current classpath supports the following names: 
> [Lucene60, Lucene62, SimpleText, Lucene70]".*
>   
>  Does anyone have a good idea about why most of the unit tests are failing 
> for us in Eclipse? 
> The whole error message in console is: 
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> jun 14, 2017 6:59:22 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> ADVERTENCIA: Uncaught exception in thread: 
> Thread[Thread-102,5,TGRP-TestIndexWriter]
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
> need to add the corresponding JAR file supporting this SPI to your classpath. 
>  The current classpath supports the following names: [Lucene60, Lucene62, 
> SimpleText, Lucene70]
>   at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
>   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
>   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
>   at 
> org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> 

[jira] [Commented] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049809#comment-16049809
 ] 

Jaewoo Kim commented on LUCENE-7879:


Thanks for the reply. I actually did that before, and I ran the command again. 
It succeeds with this echo: SUCCESS: You must right-click your project and 
choose Refresh.

However, the same JUnit test failure is happening. In theory, since I just 
forked and built Lucene without changing it, all unit tests should pass, right? 

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We built Lucene using Ant, opened the project in Eclipse, and ran JUnit tests. 
> Some tests succeed, but a lot of the tests report an exception. For 
> example, running the JUnit test on TestIndexWriter.java produces
> * "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
> does not exist.  You need to add the corresponding JAR file supporting this 
> SPI to your classpath.  The current classpath supports the following names: 
> [Lucene60, Lucene62, SimpleText, Lucene70]".*
>   
>  Does anyone have a good idea about why most of the unit tests are failing 
> for us in Eclipse? 
> The whole error message in console is: 
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> jun 14, 2017 6:59:22 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> ADVERTENCIA: Uncaught exception in thread: 
> Thread[Thread-102,5,TGRP-TestIndexWriter]
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
> need to add the corresponding JAR file supporting this SPI to your classpath. 
>  The current classpath supports the following names: [Lucene60, Lucene62, 
> SimpleText, Lucene70]
>   at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
>   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
>   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
>   at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
>   at 
> org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
> -Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
> -Dtests.locale=es-EC -Dtests.timezone=PRT 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4074 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4074/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([57AF26AA3109442E:35C2D8EBFE872410]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12798 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MetricsHandlerTest
   [junit4]   2> Creating dataDir: 

[jira] [Resolved] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7879.
-
Resolution: Not A Problem

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>

[jira] [Commented] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049797#comment-16049797
 ] 

Robert Muir commented on LUCENE-7879:
-

You need to run {{ant eclipse}} from the top-level checkout so that you have a 
proper Eclipse configuration. Refresh your project after that. 
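The error in the quoted trace comes from Lucene's named-SPI codec lookup: a stale Eclipse classpath is missing the test-framework JAR that registers the 'Asserting' codec, which is exactly what {{ant eclipse}} repairs. A minimal sketch of that lookup behavior (a simplified stand-in with an illustrative name-to-description map, not Lucene's actual NamedSPILoader code):

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified stand-in for Lucene's Codec.forName / NamedSPILoader.lookup.
// The name set mirrors the error message; the map values are placeholders.
public class CodecLookupSketch {
    private static final Map<String, String> CODECS = new TreeMap<>();
    static {
        CODECS.put("Lucene60", "core codec");
        CODECS.put("Lucene62", "core codec");
        CODECS.put("Lucene70", "core codec");
        CODECS.put("SimpleText", "codecs module");
        // "Asserting" lives in lucene-test-framework; it is absent here,
        // just as it is absent from a stale Eclipse classpath.
    }

    public static String forName(String name) {
        String impl = CODECS.get(name);
        if (impl == null) {
            throw new IllegalArgumentException(
                "An SPI class of type org.apache.lucene.codecs.Codec with name '"
                + name + "' does not exist. The current classpath supports the "
                + "following names: " + CODECS.keySet());
        }
        return impl;
    }

    public static void main(String[] args) {
        System.out.println(forName("Lucene70"));  // present: lookup succeeds
        try {
            forName("Asserting");                 // missing: same error as the trace
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Regenerating the Eclipse configuration puts the test-framework JAR back on the classpath, so the 'Asserting' name is registered again and the lookup succeeds.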

> Cannot run Junit Test. 
> ---
>
> Key: LUCENE-7879
> URL: https://issues.apache.org/jira/browse/LUCENE-7879
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 6.5.1
> Environment: MacOS 
>Reporter: Jaewoo Kim
>Priority: Minor
> Fix For: master (7.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>

[jira] [Commented] (SOLR-9177) Support oom hook when running Solr in foreground mode

2017-06-14 Thread Chris Haumesser (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049792#comment-16049792
 ] 

Chris Haumesser commented on SOLR-9177:
---

With systemd eating the world, more and more people are going to be running 
Solr in the foreground. Since this bug can lead to data corruption, and the 
fix is simple, it would be great if someone could take a look at this. 

> Support oom hook when running Solr in foreground mode
> -
>
> Key: SOLR-9177
> URL: https://issues.apache.org/jira/browse/SOLR-9177
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>
> After reading through the comments on SOLR-8145 and from my own experience, 
> it seems a reasonable number of people run Solr in foreground mode in 
> production.
> To give some more context, I've seen Solr hit an OOM, which leads to the IW 
> being closed by Lucene. The Solr process hangs around, and without the oom 
> killer, all queries continue to work but all update requests start failing.
> I think it makes sense to add support to the bin/solr script for the oom 
> hook when running in fg mode.
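The failure mode described above can be sketched in-process: a thread dies with an OutOfMemoryError, but the JVM keeps running unless something reacts. This is illustrative only, not Solr's actual hook (the bin/solr script passes an -XX:OnOutOfMemoryError option to the JVM in background mode); it just shows why a hook is needed, using a default uncaught-exception handler as a stand-in.

```java
// Illustrative sketch of the SOLR-9177 failure mode: without an OOM hook, a
// thread can die with OutOfMemoryError while the process limps along (e.g.
// with a closed IndexWriter). A hook gives the process a chance to halt.
public class OomHookSketch {
    static volatile boolean oomSeen = false;

    static void installHook() {
        Thread.setDefaultUncaughtExceptionHandler((thread, err) -> {
            if (err instanceof OutOfMemoryError) {
                oomSeen = true;
                // A production hook would halt here so the node restarts
                // cleanly instead of serving queries but failing updates:
                // Runtime.getRuntime().halt(1);
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        installHook();
        Thread indexer = new Thread(() -> {
            throw new OutOfMemoryError("simulated");  // stands in for a real OOM
        });
        indexer.start();
        indexer.join();
        System.out.println("OOM observed by hook: " + oomSeen);
    }
}
```

The real fix the issue asks for is simpler: have bin/solr wire up the same JVM-level OOM option in foreground mode that it already uses in background mode.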






[jira] [Updated] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaewoo Kim updated LUCENE-7879:
---
Description: 
We built lucene using ant, opened the project on eclipse and ran Junit tests. 
Some tests are successful but alot of the tests report an exception. For 
example, running Junit test on TestIndexWriter.java produces

* "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
does not exist.  You need to add the corresponding JAR file supporting this SPI 
to your classpath.  The current classpath supports the following names: 
[Lucene60, Lucene62, SimpleText, Lucene70]".*
  
 Does anyone have a good idea about why most of the unit tests are not 
successful for us on eclipse? 

The whole error message in console is: 

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
jun 14, 2017 6:59:22 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
ADVERTENCIA: Uncaught exception in thread: 
Thread[Thread-102,5,TGRP-TestIndexWriter]
java.lang.IllegalArgumentException: An SPI class of type 
org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You need 
to add the corresponding JAR file supporting this SPI to your classpath.  The 
current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]
at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDontInvokeAnalyzerForUnAnalyzedFields 
-Dtests.seed=74868B172C079B1F -Dtests.locale=es-EC -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testIndexStoreCombos -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyFieldNameTerms -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 

[jira] [Updated] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaewoo Kim updated LUCENE-7879:
---
Description: 
We built lucene using ant, opened the project on eclipse and ran Junit tests. 
Some tests are successful but alot of the tests report an exception. For 
example, running Junit test on TestIndexWriter.java produces
* "An SPI class of type org.apache.lucene.codecs.Codec with name 'Asserting' 
does not exist.  You need to add the corresponding JAR file supporting this SPI 
to your classpath.  The current classpath supports the following names: 
[Lucene60, Lucene62, SimpleText, Lucene70]".* 
* 
*  Does anyone have a good idea about why most of the unit tests are not 
successful for us on eclipse? 

The whole error message in console is: 

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
jun 14, 2017 6:59:22 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
ADVERTENCIA: Uncaught exception in thread: 
Thread[Thread-102,5,TGRP-TestIndexWriter]
java.lang.IllegalArgumentException: An SPI class of type 
org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You need 
to add the corresponding JAR file supporting this SPI to your classpath.  The 
current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]
at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDontInvokeAnalyzerForUnAnalyzedFields 
-Dtests.seed=74868B172C079B1F -Dtests.locale=es-EC -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testIndexStoreCombos -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyFieldNameTerms -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 

[jira] [Updated] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaewoo Kim updated LUCENE-7879:
---
Description: 
We built lucene using ant, opened the project on eclipse and ran Junit tests. 
Some tests are successful but alot of the tests report an exception. For 
example, running Junit test on TestIndexWriter.java produces* "An SPI class of 
type org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
need to add the corresponding JAR file supporting this SPI to your classpath.  
The current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]".*  Does anyone have a good idea about why most of the 
unit tests are not successful for us on eclipse? 

The whole error message in console is: 

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
jun 14, 2017 6:59:22 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
ADVERTENCIA: Uncaught exception in thread: 
Thread[Thread-102,5,TGRP-TestIndexWriter]
java.lang.IllegalArgumentException: An SPI class of type 
org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You need 
to add the corresponding JAR file supporting this SPI to your classpath.  The 
current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]
at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDontInvokeAnalyzerForUnAnalyzedFields 
-Dtests.seed=74868B172C079B1F -Dtests.locale=es-EC -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testIndexStoreCombos -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyFieldNameTerms -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 

[jira] [Updated] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaewoo Kim updated LUCENE-7879:
---
Description: 
We built Lucene using ant, opened the project in Eclipse, and ran JUnit tests. 
Some tests are successful but a lot of the tests report an exception. For 
example, running the JUnit test on TestIndexWriter.java produces "An SPI class of 
type org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
need to add the corresponding JAR file supporting this SPI to your classpath.  
The current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]".  Does anyone have a good idea why most of the 
unit tests fail for us in Eclipse? 

The whole error message in console is: 

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
jun 14, 2017 6:59:22 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
ADVERTENCIA: Uncaught exception in thread: 
Thread[Thread-102,5,TGRP-TestIndexWriter]
java.lang.IllegalArgumentException: An SPI class of type 
org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You need 
to add the corresponding JAR file supporting this SPI to your classpath.  The 
current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]
at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDontInvokeAnalyzerForUnAnalyzedFields 
-Dtests.seed=74868B172C079B1F -Dtests.locale=es-EC -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testIndexStoreCombos -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyFieldNameTerms -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
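The exception above comes from Lucene's name-based SPI lookup: codec implementations are discovered from {{META-INF/services}} entries on the classpath, and the 'Asserting' codec ships in the lucene-test-framework jar, so an Eclipse run configuration that omits it fails exactly this way. A simplified, hypothetical sketch of that lookup (not the real {{NamedSPILoader}} class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch of the name-based lookup Codec.forName performs via
// NamedSPILoader (hypothetical class, not Lucene's): only implementations
// whose provider jars are on the classpath are registered, so asking for a
// name from a missing jar throws the IllegalArgumentException quoted above.
final class NamedRegistry {
    private final Map<String, Object> byName = new LinkedHashMap<>();

    void register(String name, Object impl) {
        byName.put(name, impl);
    }

    Object lookup(String name) {
        Object impl = byName.get(name);
        if (impl == null) {
            throw new IllegalArgumentException("An SPI class with name '" + name
                + "' does not exist.  The current classpath supports the following names: "
                + byName.keySet());
        }
        return impl;
    }
}
```

Running the tests with lucene-test-framework (and its service files) on the classpath, as the ant build does, makes the 'Asserting' name resolvable.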

[jira] [Commented] (SOLR-10874) FloatPayloadValueSource throws assertion error if debug=true

2017-06-14 Thread Michael Kosten (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049777#comment-16049777
 ] 

Michael Kosten commented on SOLR-10874:
---

You're not seeing it in the wild because assertions aren't enabled. I tried 
running Solr in the wild with assertions enabled (which I don't think anyone 
would do normally), but it failed on another assertion, so I can't get it 
to the point of executing a query with the payload function. However, when you 
do run it in the wild, the explain output is wrong: it shows 0.0 for the 
result of the payload function. You can see it in your comment above for doc 
1: "product of:\n  0.0 = payload(vals_dpf,one,const(0.0))=0.0".

I traced what happens when debug is true: floatVal is called 3 times 
sequentially for each doc in the explain output.
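That three-calls-per-doc pattern breaks any value source whose floatVal advances underlying state on every call. A minimal sketch of one way to make floatVal idempotent per doc, by caching the last computed value (hypothetical illustration, not Solr's actual FloatPayloadValueSource code or its fix):

```java
import java.util.function.IntToDoubleFunction;

// Hypothetical sketch, not Solr's actual fix: cache the last (doc, value)
// pair so repeated floatVal calls for the same doc (as the debug/explain
// path makes) neither re-advance underlying iterators nor trip assertions.
final class MemoizingFloatValues {
    private final IntToDoubleFunction compute; // consumes underlying per-doc state
    private int lastDoc = -1;
    private float lastValue;

    MemoizingFloatValues(IntToDoubleFunction compute) {
        this.compute = compute;
    }

    float floatVal(int doc) {
        if (doc != lastDoc) {      // compute at most once per doc
            lastValue = (float) compute.applyAsDouble(doc);
            lastDoc = doc;
        }
        return lastValue;          // repeated calls return the cached value
    }
}
```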





> FloatPayloadValueSource throws assertion error if debug=true
> 
>
> Key: SOLR-10874
> URL: https://issues.apache.org/jira/browse/SOLR-10874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Michael Kosten
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10874.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Using the new payload function will fail with an assertion error if the debug 
> parameter is included in the query. This is caused by the floatValue method 
> in FloatPayloadValueSource being called for the same doc id twice in a row.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7879) Cannot run Junit Test.

2017-06-14 Thread Jaewoo Kim (JIRA)
Jaewoo Kim created LUCENE-7879:
--

 Summary: Cannot run Junit Test. 
 Key: LUCENE-7879
 URL: https://issues.apache.org/jira/browse/LUCENE-7879
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Affects Versions: 6.5.1
 Environment: MacOS 
Reporter: Jaewoo Kim
Priority: Minor
 Fix For: master (7.0)


We built Lucene using ant, opened the project in Eclipse, and ran JUnit 
tests. Some tests are successful but a lot of the tests report an exception. For 
example, running the JUnit test on TestIndexWriter.java produces "An SPI class of 
type org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You 
need to add the corresponding JAR file supporting this SPI to your classpath.  
The current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]".   

The whole error message in console is: 

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testNRTAfterCommit -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testManySeparateThreads -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testCloseWhileMergeIsRunning -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testChangesAfterClose -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testUnlimitedMaxFieldLength -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testStopwordsPosIncHole2 -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
jun 14, 2017 6:59:22 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
ADVERTENCIA: Uncaught exception in thread: 
Thread[Thread-102,5,TGRP-TestIndexWriter]
java.lang.IllegalArgumentException: An SPI class of type 
org.apache.lucene.codecs.Codec with name 'Asserting' does not exist.  You need 
to add the corresponding JAR file supporting this SPI to your classpath.  The 
current classpath supports the following names: [Lucene60, Lucene62, 
SimpleText, Lucene70]
at __randomizedtesting.SeedInfo.seed([74868B172C079B1F]:0)
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:419)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:358)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:529)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.index.TestIndexWriter$IndexerThreadInterrupt.run(TestIndexWriter.java:1104)

NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testThreadInterruptDeadlock -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDoBeforeAfterFlush -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testEmptyDocAfterFlushingRealDoc -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testDontInvokeAnalyzerForUnAnalyzedFields 
-Dtests.seed=74868B172C079B1F -Dtests.locale=es-EC -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testIndexStoreCombos -Dtests.seed=74868B172C079B1F 
-Dtests.locale=es-EC -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
NOTE: reproduce with: ant test  

[jira] [Created] (SOLR-10892) Ref Guide: Move parameters out of tables

2017-06-14 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-10892:


 Summary: Ref Guide: Move parameters out of tables
 Key: SOLR-10892
 URL: https://issues.apache.org/jira/browse/SOLR-10892
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: master (7.0)


We've somewhat overused tables to explain various config parameters. We have 
some tables that are massive and try to cram a ton of complex information into 
a row (see function-queries.adoc), while other tables are only 1 or 2 rows. 
It's not impossible, but it's also difficult to link directly to parameters 
when they are in a table.

AsciiDoc format now allows us to use "description lists" (also called 
"definition lists"), which in many cases might be better. This issue would 
change many of the current tables to definition lists. However, some of them 
may remain, depending on how they work within the content.
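To illustrate the proposed change (parameter name and anchor are hypothetical), here is the same parameter rendered as a one-row table versus an AsciiDoc description list with a linkable anchor:

```asciidoc
// Current style: a table row is hard to link to directly.
[cols="20,80"]
|===
|Parameter |Description

|`q` |The main query string.
|===

// Proposed style: a description list; each term can carry its own anchor.
[[param-q]]`q`::
The main query string.
```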






[jira] [Updated] (SOLR-10832) Using "indexed" PointField for _version_ breaks VersionInfo.getMaxVersionFromIndex

2017-06-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10832:

Attachment: SOLR-10832.patch

Hacked-up patch showing that {{PointValues.getMaxPackedValue}} gets the test 
working.

Still need to figure out whether it's worthwhile to try to abstract some of this 
decoding down into the FieldType.
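The {{PointValues.getMaxPackedValue}} approach works because point values are stored as fixed-width big-endian bytes with the sign bit flipped, so the lexicographically largest packed value decodes directly to the numeric maximum. A self-contained sketch of that sortable-long encoding (reimplemented here for illustration; the real code would call {{LongPoint.decodeDimension}} on the reader's max packed value):

```java
// Sortable encoding of longs as used by Lucene point fields: big-endian
// bytes with the sign bit flipped, so unsigned byte-wise order matches
// numeric order. That property is why the max packed value is the max long.
final class SortableLongBytes {
    static byte[] encode(long v) {
        long sortable = v ^ 0x8000000000000000L; // flip sign bit
        byte[] out = new byte[8];
        for (int i = 7; i >= 0; i--) {
            out[i] = (byte) sortable;
            sortable >>>= 8;
        }
        return out;
    }

    static long decode(byte[] b) {
        long sortable = 0;
        for (int i = 0; i < 8; i++) {
            sortable = (sortable << 8) | (b[i] & 0xFFL);
        }
        return sortable ^ 0x8000000000000000L; // restore sign bit
    }

    // Unsigned lexicographic comparison, the order packed values are kept in.
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return 0;
    }
}
```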

> Using "indexed" PointField for _version_ breaks 
> VersionInfo.getMaxVersionFromIndex
> --
>
> Key: SOLR-10832
> URL: https://issues.apache.org/jira/browse/SOLR-10832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-10832.patch, SOLR-10832.patch
>
>
> If someone configures {{\_version_}} using a {{LongPointField}} which is 
> {{indexed="true"}} then {{VersionInfo.getMaxVersionFromIndex()}} will 
> incorrectly assume...
> {code}
> // if indexed, then we have terms to get the max from
> if (versionField.indexed()) {
>   LeafReader leafReader = 
> SlowCompositeReaderWrapper.wrap(searcher.getIndexReader());
>   Terms versionTerms = leafReader.terms(versionFieldName);
>   Long max = (versionTerms != null) ? 
> LegacyNumericUtils.getMaxLong(versionTerms) : null;
> {code}
> ...which will not work because Point based fields have no Terms.
> potential work around: configuring {{\_version_}} to use {{indexed="false" 
> docValues="true"}} should cause this branch to be skipped and the existing 
> ValueSource/DocValues based fallback to be used.
> We should either:
> * figure out if an alternative option exists for determining the "max" value 
> of a LongPointField, and if so use that if {{versionField.indexed() && 
> versionField.getType().isPointField()}}
> * change {{VersionInfo.getAndCheckVersionField()}} to check if the version 
> field {{IsPointField()}} and if so error unless {{indexed="false" && 
> docValues="true"}}






[jira] [Commented] (SOLR-10824) java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats

2017-06-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049707#comment-16049707
 ] 

Mikhail Khludnev commented on SOLR-10824:
-

Right, [~erickerickson]. This is exactly the case. id query falls off  

> java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats 
> --
>
> Key: SOLR-10824
> URL: https://issues.apache.org/jira/browse/SOLR-10824
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> {quote}
>  INFO [qtp411311908-32] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select 
> params={..=false&_stateVer_=collection1:5...=32768=http://127.0.0.1:57114/solr/collection1_shard3_replica2/|http://127.0.0.1:57112/solr/collection1_shard3_replica1/=2)=1496751847089=true=FRM=javabin}
>  status=0 QTime=18
> INFO [qtp2123780104-30] (SolrCore.java:2304) - [collection1_shard1_replica1] 
> ...
>  INFO [qtp411311908-45] (SolrCore.java:2304) - [collection1_shard2_replica1]  
> ...
> ERROR [qtp411311908-33] (SolrException.java:148) - 
> java.lang.NullPointerException
>   at 
> org.apache.solr.search.stats.ExactSharedStatsCache.getPerShardTermStats(ExactSharedStatsCache.java:76)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(ExactStatsCache.java:233)
>   at 
> org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:930)
>   at 
> org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:726)
>   at 
> org.apache.solr.handler.component.QueryComponent.distributedProcess(QueryComponent.java:679)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345)
>  INFO [qtp411311908-33] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select params={...=javabin=2} status=500 
> QTime=82
> {quote}
> Switching to {{LRUStatsCache}} seems help.






[jira] [Commented] (SOLR-9177) Support oom hook when running Solr in foreground mode

2017-06-14 Thread Simon Tower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049695#comment-16049695
 ] 

Simon Tower commented on SOLR-9177:
---

I accidentally hosed our entire cluster when I queried for too many rows. 
Having this option on by default when running in the foreground would have 
saved me a lot of trouble today. One of the shard indexes became corrupt, 
forcing us to delete all of the collections, recreate them, and reindex.

> Support oom hook when running Solr in foreground mode
> -
>
> Key: SOLR-9177
> URL: https://issues.apache.org/jira/browse/SOLR-9177
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>
> After reading through the comments on SOLR-8145 and from my own experience, 
> seems like a reasonable number of people run Solr in foreground mode in 
> production.
> To give some more context, I've seen Solr hit OOM, which leads to IW being 
> closed by Lucene. The Solr process hangs in there and without the oom killer, 
> while all queries continue to work, all update requests start failing.
> I think it makes sense to add support to the bin/solr script to add the oom 
> hook when running in fg mode.
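For context, the hook in question is the JVM's {{-XX:OnOutOfMemoryError}} option, which Solr's background mode already wires up via oom_solr.sh. A sketch of how foreground mode could do the same (the HotSpot flag is real; the variable names and start.jar invocation are illustrative placeholders, not the actual bin/solr change):

```shell
# Illustrative sketch, not the actual bin/solr patch. The HotSpot flag is
# real; variable names and the start.jar invocation are placeholders.
OOM_HOOK='-XX:OnOutOfMemoryError="kill -9 %p"'   # %p expands to the JVM pid

# In foreground mode the script would append the hook to the java command line:
SOLR_JAVA_CMD="java $OOM_HOOK -jar start.jar"
echo "$SOLR_JAVA_CMD"
```

With the hook in place, an OOM kills the JVM outright instead of leaving a half-alive process whose IndexWriter is closed.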






[jira] [Commented] (SOLR-10760) Remove trie field types and fields from example schemas

2017-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049686#comment-16049686
 ] 

Steve Rowe commented on SOLR-10760:
---

bq. 3. Does BBoxField need Trie* fields?

Yes it does: SOLR-10891

> Remove trie field types and fields from example schemas
> ---
>
> Key: SOLR-10760
> URL: https://issues.apache.org/jira/browse/SOLR-10760
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10760.patch
>
>
> In order to make points fields the default, we should remove all trie field 
> types and fields from example schemas.






[jira] [Commented] (SOLR-10824) java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats

2017-06-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049687#comment-16049687
 ] 

Erick Erickson commented on SOLR-10824:
---

Is this related to the speculation on the mailing lists that another 
prerequisite is that some of the shards do _not_ happen to have documents on 
them?

Or perhaps is it a problem if the query has zero hits on a particular shard? If 
the latter, it should be pretty easy to reproduce by firing off, say, a query 
on a .

> java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats 
> --
>
> Key: SOLR-10824
> URL: https://issues.apache.org/jira/browse/SOLR-10824
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> {quote}
>  INFO [qtp411311908-32] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select 
> params={..=false&_stateVer_=collection1:5...=32768=http://127.0.0.1:57114/solr/collection1_shard3_replica2/|http://127.0.0.1:57112/solr/collection1_shard3_replica1/=2)=1496751847089=true=FRM=javabin}
>  status=0 QTime=18
> INFO [qtp2123780104-30] (SolrCore.java:2304) - [collection1_shard1_replica1] 
> ...
>  INFO [qtp411311908-45] (SolrCore.java:2304) - [collection1_shard2_replica1]  
> ...
> ERROR [qtp411311908-33] (SolrException.java:148) - 
> java.lang.NullPointerException
>   at 
> org.apache.solr.search.stats.ExactSharedStatsCache.getPerShardTermStats(ExactSharedStatsCache.java:76)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(ExactStatsCache.java:233)
>   at 
> org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:930)
>   at 
> org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:726)
>   at 
> org.apache.solr.handler.component.QueryComponent.distributedProcess(QueryComponent.java:679)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345)
>  INFO [qtp411311908-33] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select params={...=javabin=2} status=500 
> QTime=82
> {quote}
> Switching to {{LRUStatsCache}} seems help.






[jira] [Updated] (SOLR-10891) BBoxField does not support point-based number sub-fields

2017-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10891:
--
Issue Type: Sub-task  (was: Bug)
Parent: SOLR-9995

> BBoxField does not support point-based number sub-fields
> 
>
> Key: SOLR-10891
> URL: https://issues.apache.org/jira/browse/SOLR-10891
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10891.patch, tests-failures.txt
>
>
> I noticed while removing Trie fields from example schemas on SOLR-10760 that 
> BBoxField uses Trie fields in at least one example schema.
> I went looking and there is theoretical machinery to support points, but when 
> I added a point-based bbox variant to TestSolr4Spatial, I get test failures.






[jira] [Comment Edited] (LUCENE-7863) Don't repeat postings and positions on ReverseWF, EdgeNGram, etc

2017-06-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033650#comment-16033650
 ] 

Mikhail Khludnev edited comment on LUCENE-7863 at 6/14/17 9:26 PM:
---

Let's index six one-word docs:
|foo|
|foo|
|foo|
|bar|
|bar|
|bar|

h3. Inverted index with ReversedWildcardFilter

|term|posting offset (relative)|
|1oof|0|
|1rab|3|
|bar|3| 
|foo|3|

|Postings (absolute values)|
|0,1,2|
|3,4,5|
|3,4,5|
|0,1,2|

Here you see that postings (and positions) are duplicated for every derived 
term.

h2. Proposal: DRY

|term|posting offset (relative)|
|1oof|0|
|1rab|3|
|bar|-3| 
|foo|-3|

|Postings (absolute values)|
|0,1,2|
|3,4,5|

h2. Note
It seems really challenging to implement. Given that codecs don't allow such 
tweaking, I had to change {{o.a.l.i}} classes. This code introduces a relation 
between terms, see {{FreqProxTermsEnum.getTwinTerm()}} and so on (it's one of 
the ugliest pieces). It also requires changing the term block format: posting 
offsets are written as ZLong (instead of VLong), since they need to be 
negative. I'm afraid it breaks a lot of tests, since I was only interested in 
{{TestReversedWildcardFilterFactory}}; it passes. I also experimented with a 5M 
enwiki index and it roughly works: RWF blows the index up from 13G to 28G, this 
code keeps it at 17G, and it runs \*leading queries fast.
It targets only {{RWF}}, where the derived term is 1-1 with the original one. 
This patch is for branch_6x.

h2. Disclaimer
Current patch is mad and dirty ({{trickedFields = Arrays.asList("one", 
"body_txt_en")}}, and plenty of {{sysout}} ), I've just scratched the idea. 

h2. TODO
- How to carry relation between origin and derived NGramm terms (1 - Many)? 
- How to adjust the current {{o.a.l.i}} to bring reduplicated postings to the 
codec?

h2. The next idea
For \*infix\* searches we need to derive the following terms (for three 
{{bar}} docs and three {{baz}} docs):
|term|position offset|
|ar_bar|0|
|az_baz|3|
|bar|-3|
|baz|3|
|r_bar|-3|
|z_baz|3|
Here we should write each postings list only once, and on a {{\*a\*}} query find 
both postings lists with a prefix query {{a\*}}. 


  


was (Author: mkhludnev):
Let's index six one word docs:
|foo|
|foo|
|foo|
|bar|
|bar|
|bar|

h3. Index with ReversedWildcardFilter

|term|posting offset (relative)|
|1oof|0|
|1rab|3|
|bar|3| 
|foo|3|

|Postings (absolute values)|
|0,1,2|
|3,4,5|
|3,4,5|
|0,1,2|

Here you see that postings (and positions) are duplicated for every derived 
term.

h2. Proposal - DRY

|term|posting offset (relative)|
|1oof|0|
|1rab|3|
|bar|-3| 
|foo|-3|

|Postings (absolute values)|
|0,1,2|
|3,4,5|

h2. Note
It seems like it's really challenging to implement, giving that codecs doesn't 
allow such tweaking, I had to change {{o.a.l.i}} classes. This code introduces 
the relation between terms see {{FreqProxTermsEnum.getTwinTerm()}} and so one 
(it's one of the ugliest pieces). It also requires to change the term block 
format: posting offsets are written in ZLong (instead of Vlong), since they 
need to be negative. I'm afraid it ruins a lot of tests, since I were 
interested in the only one {{TestReversedWildcardFilterFactory}}. It passes. I 
also experiment with 5M enwiki and it seems roughly works: RWF blows index from 
13G to 28G and this code keeps it at 17G and runs *leading queries fast.
It aims only {{RWF}} where derived term is 1-1 to the origin one. This patch 
for branch_6x.

h2. Disclaimer
Current patch is mad and dirty ({{trickedFields = Arrays.asList("one", 
"body_txt_en")}}, and plenty of {{sysout}} ), I've just scratched the idea. 

h2. TODO
- How to carry relation between origin and derived NGramm terms (1 - Many)? 
- How to adjust the current {{o.a.l.i}} to bring reduplicated postings to the 
codec?

h2. The next idea
For \*infix\* searches it needs to derive the following terms (for three 
{{bar}} docs and thee {{baz}} docs):
|term|position offset|
|ar_bar|0|
|az_baz|3|
|bar|-3|
|baz|3
|r_bar|-3|
|z_baz|3|
Here we should write both postings only once. And on {{\*a\*}} query find both 
posting with a prefix query {{a\*}}. 


  

> Don't repeat postings and positions on ReverseWF, EdgeNGram, etc  
> --
>
> Key: LUCENE-7863
> URL: https://issues.apache.org/jira/browse/LUCENE-7863
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Mikhail Khludnev
> Attachments: LUCENE-7863.hazard
>
>
> h2. Context
> \*suffix and \*infix\* searches on large indexes. 
> h2. Problem
> Obviously applying {{ReversedWildcardFilter}} doubles an index size, and I'm 
> shuddering to think about EdgeNGrams...
> h2. Proposal 
> _DRY_




[jira] [Updated] (SOLR-10891) BBoxField does not support point-based number sub-fields

2017-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10891:
--
Attachment: SOLR-10891.patch
tests-failures.txt

Patch that adds a point-based bbox variant field type to TestSolr4Spatial.

I got past a couple of initial problems. You can see in the patch excerpt 
below that there was an explicit test for a Trie field, which I replaced with a 
check for a numeric field with doc values; and attempting to set the is-stored 
property on the Lucene points field type failed because the field type was 
already frozen, so I substituted an unfrozen copy first.

{noformat}
index 4d773c96ac..7552176b15 100644
--- a/solr/core/src/java/org/apache/solr/schema/BBoxField.java
+++ b/solr/core/src/java/org/apache/solr/schema/BBoxField.java
@@ -89,8 +89,11 @@ public class BBoxField extends 
AbstractSpatialFieldType implements
 if (!(booleanType instanceof BoolField)) {
   throw new RuntimeException("Must be a BoolField: " + booleanType);
 }
-if (!(numberType instanceof TrieDoubleField)) { // TODO support TrieField 
(any trie) once BBoxStrategy does
-  throw new RuntimeException("Must be TrieDoubleField: " + numberType);
+if (numberType.getNumberType() != NumberType.DOUBLE) {
+  throw new RuntimeException("Must be Double number type: " + numberType);
+}
+if ( ! numberType.hasProperty(DOC_VALUES)) {
+  throw new RuntimeException("Must have doc values: " + numberType);
 }
 
 //note: this only works for explicit fields, not dynamic fields
@@ -138,7 +141,9 @@ public class BBoxField extends 
AbstractSpatialFieldType implements
 final SchemaField solrNumField = new SchemaField("_", numberType);//dummy 
temp
 org.apache.lucene.document.FieldType luceneType =
 (org.apache.lucene.document.FieldType) 
solrNumField.createField(0.0).fieldType();
+luceneType = new org.apache.lucene.document.FieldType(luceneType);
 luceneType.setStored(storeSubFields);
+luceneType.freeze();
{noformat}
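The unfrozen-copy trick in the patch above is an instance of a general copy-before-mutate pattern for frozen objects. A minimal stdlib-only sketch of the idea (the FrozenType class below is a toy stand-in for Lucene's org.apache.lucene.document.FieldType, not the real class):

```java
public class FreezeDemo {
    // Toy stand-in for a freezable type like Lucene's FieldType (illustrative only):
    // once freeze() is called, mutators throw IllegalStateException.
    static class FrozenType {
        private boolean stored;
        private boolean frozen;

        FrozenType() {}

        // Copy constructor: the copy starts out unfrozen, so it can be changed.
        FrozenType(FrozenType other) {
            this.stored = other.stored;
        }

        void setStored(boolean v) {
            if (frozen) throw new IllegalStateException("already frozen");
            stored = v;
        }

        boolean isStored() { return stored; }

        void freeze() { frozen = true; }
    }

    public static void main(String[] args) {
        FrozenType original = new FrozenType();
        original.freeze(); // simulates a fieldtype frozen by earlier code

        // Setting a property on the frozen instance would throw, so
        // substitute an unfrozen copy first, mutate it, then re-freeze:
        FrozenType copy = new FrozenType(original);
        copy.setStored(true);
        copy.freeze();

        System.out.println(copy.isStored()); // prints "true"
    }
}
```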

But several of the tests are failing for reasons I don't understand.  I'm 
attaching the log: [^tests-failures.txt].

[~dsmiley], could you take a look?

> BBoxField does not support point-based number sub-fields
> 
>
> Key: SOLR-10891
> URL: https://issues.apache.org/jira/browse/SOLR-10891
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10891.patch, tests-failures.txt
>
>
> I noticed while removing Trie fields from example schemas on SOLR-10760 that 
> BBoxField uses Trie fields in at least one example schema.
> I went looking and there is theoretical machinery to support points, but when 
> I added a point-based bbox variant to TestSolr4Spatial, I get test failures.





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10824) java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats

2017-06-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049672#comment-16049672
 ] 

Mikhail Khludnev commented on SOLR-10824:
-

Steps to reproduce:
* launched cloud example 4 nodes, 
* created test collection 3x3 (my gut feeling is that it's caused by 3 
replicas, at least it's reproduced when it's 3).
* indexed example docs
* added {{}} via {{zk 
downconfig/vim/zk upconfig}}
* here we go
{code}

  true
  500
  14
  
_text_:all
on
xml
1497475014424
  


  java.lang.NullPointerException
at 
org.apache.solr.search.stats.ExactSharedStatsCache.getPerShardTermStats(ExactSharedStatsCache.java:76)
at 
org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(ExactStatsCache.java:233)
at 
org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:942)
at 
org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:738)
at 
org.apache.solr.handler.component.QueryComponent.distributedProcess(QueryComponent.java:691)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:346)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
{code}
What are your bets and opinions? 
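For what it's worth, the stack trace is consistent with a per-shard map lookup that assumes an entry exists for every shard. A hypothetical sketch of that failure mode and a defensive guard (plain java.util, invented names — not Solr's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class PerShardLookup {
    // Hypothetical shape: shard name -> (term -> stats); names are made up.
    static final Map<String, Map<String, String>> perShardTermStats = new HashMap<>();

    // Unsafe lookup: throws NullPointerException when the shard has no entry yet,
    // the same shape of failure as in the stack trace above.
    static String unsafeGet(String shard, String term) {
        return perShardTermStats.get(shard).get(term); // get(shard) may be null
    }

    // Defensive lookup: returns null instead of throwing.
    static String safeGet(String shard, String term) {
        Map<String, String> entries = perShardTermStats.get(shard);
        return entries == null ? null : entries.get(term);
    }

    public static void main(String[] args) {
        Map<String, String> shard1 = new HashMap<>();
        shard1.put("all", "df=42");
        perShardTermStats.put("shard1", shard1);

        System.out.println(safeGet("shard1", "all")); // prints "df=42"
        System.out.println(safeGet("shard3", "all")); // prints "null", no NPE

        try {
            unsafeGet("shard3", "all");
        } catch (NullPointerException e) {
            System.out.println("NPE for missing shard");
        }
    }
}
```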

> java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats 
> --
>
> Key: SOLR-10824
> URL: https://issues.apache.org/jira/browse/SOLR-10824
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> {quote}
>  INFO [qtp411311908-32] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select 
> params={..=false&_stateVer_=collection1:5...=32768=http://127.0.0.1:57114/solr/collection1_shard3_replica2/|http://127.0.0.1:57112/solr/collection1_shard3_replica1/=2)=1496751847089=true=FRM=javabin}
>  status=0 QTime=18
> INFO [qtp2123780104-30] (SolrCore.java:2304) - [collection1_shard1_replica1] 
> ...
>  INFO [qtp411311908-45] (SolrCore.java:2304) - [collection1_shard2_replica1]  
> ...
> ERROR [qtp411311908-33] (SolrException.java:148) - 
> java.lang.NullPointerException
>   at 
> org.apache.solr.search.stats.ExactSharedStatsCache.getPerShardTermStats(ExactSharedStatsCache.java:76)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(ExactStatsCache.java:233)
>   at 
> org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:930)
>   at 
> org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:726)
>   at 
> org.apache.solr.handler.component.QueryComponent.distributedProcess(QueryComponent.java:679)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345)
>  INFO [qtp411311908-33] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select params={...=javabin=2} status=500 
> QTime=82
> {quote}
> Switching to {{LRUStatsCache}} seems to help.






[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049668#comment-16049668
 ] 

Erick Erickson commented on SOLR-10574:
---

[~janhoy] [~arafalov] Defaulting to searching all fields works for me, and in 
fact is superior to the catch-all field IMO: when the user found out that 
the queries were slow, it would be a configuration change rather than a 
re-index. The latter would be necessary in the catch-all case to get _rid_ of 
the extra data in the _text_ field.

I'm happy with any solution that satisfies the condition that if a new user 
indexes some data and then does a non-fielded query, they get results.
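For illustration, "searching all fields" could be wired up as a request-handler default rather than a copyField — e.g. something like the following hypothetical solrconfig.xml fragment (the field names in qf are invented; with this approach, turning it off is a config edit, not a re-index):

```xml
<!-- Hypothetical /select defaults: query across several fields with edismax
     instead of copying everything into a catch-all _text_ field. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- qf lists the fields to search; adjust to the schema's actual fields -->
    <str name="qf">title description author</str>
  </lst>
</requestHandler>
```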

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






[jira] [Updated] (SOLR-10824) java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats

2017-06-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10824:

Affects Version/s: 6.6

> java.lang.NullPointerException ExactSharedStatsCache.getPerShardTermStats 
> --
>
> Key: SOLR-10824
> URL: https://issues.apache.org/jira/browse/SOLR-10824
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> {quote}
>  INFO [qtp411311908-32] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select 
> params={..=false&_stateVer_=collection1:5...=32768=http://127.0.0.1:57114/solr/collection1_shard3_replica2/|http://127.0.0.1:57112/solr/collection1_shard3_replica1/=2)=1496751847089=true=FRM=javabin}
>  status=0 QTime=18
> INFO [qtp2123780104-30] (SolrCore.java:2304) - [collection1_shard1_replica1] 
> ...
>  INFO [qtp411311908-45] (SolrCore.java:2304) - [collection1_shard2_replica1]  
> ...
> ERROR [qtp411311908-33] (SolrException.java:148) - 
> java.lang.NullPointerException
>   at 
> org.apache.solr.search.stats.ExactSharedStatsCache.getPerShardTermStats(ExactSharedStatsCache.java:76)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(ExactStatsCache.java:233)
>   at 
> org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:930)
>   at 
> org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:726)
>   at 
> org.apache.solr.handler.component.QueryComponent.distributedProcess(QueryComponent.java:679)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345)
>  INFO [qtp411311908-33] (SolrCore.java:2304) - [collection1_shard3_replica2]  
> webapp=/solr path=/select params={...=javabin=2} status=500 
> QTime=82
> {quote}
> Switching to {{LRUStatsCache}} seems to help.






[jira] [Created] (SOLR-10891) BBoxField does not support point-based number sub-fields

2017-06-14 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-10891:
-

 Summary: BBoxField does not support point-based number sub-fields
 Key: SOLR-10891
 URL: https://issues.apache.org/jira/browse/SOLR-10891
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


I noticed while removing Trie fields from example schemas on SOLR-10760 that 
BBoxField uses Trie fields in at least one example schema.

I went looking and there is theoretical machinery to support points, but when I 
added a point-based bbox variant to TestSolr4Spatial, I get test failures.







[jira] [Updated] (SOLR-10317) Solr Nightly Benchmarks

2017-06-14 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-10317:
---
Attachment: SOLR-10317.patch

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: changes-lucene-20160907.json, 
> changes-solr-20160907.json, managed-schema, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf, 
> SOLR-10317.patch, solrconfig.xml
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can as well 
> be used. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.






[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-06-14 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049655#comment-16049655
 ] 

Michael Sun commented on SOLR-10317:


Just uploaded the first cut of the Solr benchmark I built during my work, as one 
more benchmarking option for the community. There are a few good benchmarks in 
the community for different use cases, using different frameworks. The goal of 
my benchmark, in short, is to design an extensible, standardized benchmark that 
can be used for a variety of common performance use cases. Nightly performance 
regression tests are very important, but it would also be good if we could reuse 
the same benchmark for capacity planning, scalability studies, troubleshooting, 
etc., which have slightly different requirements than nightly tests. It would 
save everyone in the community effort if they only need to extend the benchmark, 
not rebuild one, for their own use cases in the near future.

In addition, the benchmark includes a variety of instruments to help understand 
why the performance is what it is, in addition to what the performance is. One 
obvious reason is that answering "why" is the primary goal for some use cases, 
such as troubleshooting and scalability studies. It also helps to build 
'correct' performance tests: for example, a performance bottleneck discovered in 
tests may not be a code defect but a setup issue, and being able to analyze a 
bit ensures the performance tests are testing the right thing.

Designing a good benchmark is one of my primary jobs at work, so I will 
continue to elaborate on the framework and add new tests. There are a few good 
benchmarks for Solr, and [~vivek.nar...@uga.edu] has done a great job in 
designing a few new test cases. I can help with porting or adding new test 
cases with my framework if you like.

The patch mainly includes the object model and a sample test to demonstrate it. 
More components will follow. It's one option for the community, of course, but I 
do think the community can benefit from this contribution. Any feedback is 
appreciated.



> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: changes-lucene-20160907.json, 
> changes-solr-20160907.json, managed-schema, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf, 
> solrconfig.xml
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can as well 
> be used. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.






[jira] [Commented] (SOLR-9177) Support oom hook when running Solr in foreground mode

2017-06-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049643#comment-16049643
 ] 

Shawn Heisey commented on SOLR-9177:


Strong +1 from me.  I found this issue after seeing this in the IRC channel 
today:

{noformat}
14:37 < wryfi> elyograg: looking at bin/solr, i see that -XX:OnOutOfMemoryError
   is only set when the process is backgrounded, whereas upstart
   likes to run things in the foreground. so that explains why the
   option was missing. i wonder what the reasoning was behind that
   and/or if this should be a bug report.
{noformat}


> Support oom hook when running Solr in foreground mode
> -
>
> Key: SOLR-9177
> URL: https://issues.apache.org/jira/browse/SOLR-9177
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>
> After reading through the comments on SOLR-8145 and from my own experience, 
> seems like a reasonable number of people run Solr in foreground mode in 
> production.
> To give some more context, I've seen Solr hit OOM, which leads to IW being 
> closed by Lucene. The Solr process hangs in there and without the oom killer, 
> while all queries continue to work, all update requests start failing.
> I think it makes sense to add support to the bin/solr script to add the oom 
> hook when running in fg mode.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_131) - Build # 6648 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6648/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Error from server at http://127.0.0.1:51029/solr: delete the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:51029/solr: delete the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([DF0BF1BC1F11E7E7:5F2B94920E520F41]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:592)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:219)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:459)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:389)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1130)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:442)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:965)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049609#comment-16049609
 ] 

Alexandre Rafalovitch commented on SOLR-10574:
--

I made a proposal 5 days ago in this issue that I thought was an interesting 
alternative to at least discuss (search for autofields). But I think it may 
have been lost in all other activities here. I would love somebody to comment 
on it even if it is not a valid approach in the end for this specific problem.

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






Re: Reference guide editing: newbie notes and kudos to Cassandra and Hoss:

2017-06-14 Thread Erick Erickson
David:

I have the PDF version of every release b/c I often need to search
specific versions and/or find it easier to use if it's all in one
doc

FWIW,
Erick

On Wed, Jun 14, 2017 at 10:59 AM, Cassandra Targett
 wrote:
> I'm *really* glad people are finding it a positive change.
>
> We definitely need to have it searchable, but someone needs to work on
> making it happen: https://issues.apache.org/jira/browse/SOLR-10299.
>
> On Wed, Jun 14, 2017 at 10:49 AM, David Smiley  
> wrote:
>> Thanks to Cassandra and Hoss indeed!
>>
>> Is there going to be search of the ref guide somehow?  If it's searchable
>> somewhere else then we could at least refer users there. Quick title access
>> is working at least.
>>
>> On Wed, May 31, 2017 at 11:01 AM Jan Høydahl  wrote:
>>>
>>> Agree, Erick. The new guide is amazing!
>>>
>>> Steve, could we perhaps have a change-history.adoc as part of the refguide
>>> itself, so every (major) change to the guide would add a line on that page?
>>> Benefit is that it would follow the guide, whether in HTML or PDF format.
>>>
>>> Another option is to just keep it lightweight and do some kind of GIT
>>> magic as part of refGuide release process to select all commits since last
>>> release that include adoc changes, and then pull the commit messages from
>>> those and generate a release-notes file. We could also include in that file
>>> a git diff that could be used as an aid for people when reviewing/voting for
>>> a ref-guide release, i.e. one place to double check that no weird edits have
>>> sneaked in since last release…
>>>
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>>
>>> > 30. mai 2017 kl. 20.37 skrev Steve Rowe :
>>> >
>>> >
>>> >> On May 30, 2017, at 2:04 PM, Erick Erickson 
>>> >> wrote:
>>> >>
>>> >> 4> I don't think minor edits require a JIRA, larger ones maybe. Pretty
>>> >> much just like the CWiki I suppose.
>>> >
>>> > One more thing I ran into while making the changes on SOLR-10758:
>>> >
>>> > 5> I included a new "Ref Guide” section under the 6.6 release in
>>> > solr/CHANGES.txt, but this was premature, since: a) the ref guide release 
>>> > is
>>> > still separate from the code release, so solr/CHANGES.txt isn’t the right
>>> > place (yet); and b) even after we make the ref guide release part of the
>>> > code release, it’s not clear that ref guide change notes belong in
>>> > solr/CHANGES.txt, since e.g. javadocs-only changes never get mentioned
>>> > there.  (Personally I think there should eventually be some form of
>>> > CHANGES-like release notes for the ref guide.)
>>> >
>>> > (I haven’t reverted my “Ref Guide” section addition to solr/CHANGES.txt
>>> > because there is a 6.6 RC vote underway, and if it succeeds reversion will
>>> > be pointless.)
>>> >
>>> > --
>>> > Steve
>>> > www.lucidworks.com
>>> >
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Commented] (LUCENE-7878) QueryParser AND default operator and MultiWords synonyms failed if keywords exactly matches a synonym

2017-06-14 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049587#comment-16049587
 ] 

Emmanuel Keller commented on LUCENE-7878:
-

Hi Jim,
I confirm it is fixed. Thanks for the extremely fast bug resolution :D
It would be great if this patch could also be applied to the 6.5 / 6.6 branches.


> QueryParser AND default operator and MultiWords synonyms failed if keywords 
> exactly matches a synonym
> -
>
> Key: LUCENE-7878
> URL: https://issues.apache.org/jira/browse/LUCENE-7878
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: master (7.0), 6.6, 6.5.1
> Environment: 1.8.0_131-b11 on Mac OS X
>Reporter: Emmanuel Keller
>  Labels: multi-word, query-parser, synonyms
> Attachments: LUCENE-7878.patch, LUCENE-7878.patch
>
>
> This issue is about using the QueryParser with MultiWordsSynonyms.
> To reproduce the bug:
> - Use AND as default operator
> - Use a query string which exactly matches one synonym.
> In short, the part of the query which handles the synonym lookup should keep an 
> "OR" relation between the synonyms, but it is translated as an "AND".
> If I parse: "guinea pig" which is a synonym of "cavy":
> Using default OR, I get something correct:
> "(+guinea +pig) cavy"
> Note: it should probably rather be ((+guinea +pig) cavy)
> Using AND as default operator, I get something wrong:
> +(+guinea +pig) +cavy
> I expected:
> +((+guinea +pig) cavy)
> The relation between "guinea pig" and "cavy" is now an AND. It should still be 
> an OR because it is a synonym clause.
> To help understanding. If now I parse "guinea pig world"
> And I get the expected result:
> +((+guinea +pig) cavy) +world
> The relation between "guinea pig" and "cavy" is an OR as expected (it is a 
> synonym), and the relation with "world" is AND as expected by the default 
> operator.
> Here is the additional unit test for, I hope it is pretty self-explanatory:
> org.apache.lucene.queryparser.classic.TestQueryParser
> {code:java}
> public void testDefaultOperatorWhenKeywordsMatchesExactlyOneSynonym() throws ParseException {
> // Using the default OR operator
> QueryParser smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.OR);
> assertEquals("(+guinea +pig) cavy", smart.parse("guinea pig").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy) +world", smart.parse("guinea pig world").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy)", smart.parse("guinea pig").toString("field"));
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Compile all files (including tests)

2017-06-14 Thread Jason Gerlowski
Yikes, that seems obvious in hindsight.  I've verified that does the
trick.  Thanks Uwe!

On Wed, Jun 14, 2017 at 3:29 PM, Uwe Schindler  wrote:
> Ant compile-test?
>
> Am 14. Juni 2017 21:09:36 MESZ schrieb Jason Gerlowski
> :
>>Hey all,
>>
>>Is there an ant command we support for compiling *all* files
>>(including tests) at once?
>>
>>My understanding is that "ant test" compiles each test package right
>>before it's run. This is occasionally a pain, as test runs can
>>fail halfway through because of a compilation error that
>>could've been caught up-front.
>>
>>It'd be really convenient if there was a command that supported the
>>use case of: "just make sure everything compiles". Does anyone know
>>of an ant command to do this that I'm just missing? ("ant test-help"
>>doesn't mention any options that seem helpful).
>>
>>Jason
>>
>>-
>
>>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
> Uwe Schindler
> Achterdiek 19, 28357 Bremen
> https://www.thetaphi.de




[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049575#comment-16049575
 ] 

Jan Høydahl commented on SOLR-10574:


bq. Jan Hoydahl: to be present in schema: yes; used by default: no
Long-term I want no catch-all field at all. Because no matter how much we 
document and try to educate, the reality is that the defaults (or at least the 
practices used by the defaults) will end up in production for a high percentage 
of installs.

Instead, let's consider an ability for the ootb default configsets to 
auto-search all fields if neither {{df}} nor {{qf}} is specified. A potential 
fast-track solution is to extend {{SimpleQParserPlugin}} to interpret {{qf=\*}} 
as a catch-all mode where it simply iterates over all indexed fields in the 
schema and searches across these. We could then add {{defType=simple&qf=\*}} to 
our {{/select}} and {{/query}} handlers in the default configsets. Or we could 
make {{simple}} the new default parser instead of {{lucene}} (horrible name 
btw). This could of course be introduced in 7.x and start with the catch-all 
_text_ in 7.0.0...
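To make the proposed {{qf=\*}} behaviour concrete, here is a rough, self-contained sketch of the expansion step. All names here are hypothetical; a real implementation would live in {{SimpleQParserPlugin}} and read the field list from {{IndexSchema}} rather than from a plain map:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CatchAllQf {
    // Expand a hypothetical qf=* into the list of indexed field names;
    // any other qf value is split into the explicitly listed fields.
    static List<String> expandQf(String qf, Map<String, Boolean> indexedByField) {
        if (!"*".equals(qf)) {
            return List.of(qf.trim().split("\\s+"));
        }
        List<String> fields = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : indexedByField.entrySet()) {
            if (e.getValue()) { // keep only indexed fields
                fields.add(e.getKey());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, Boolean> schema = new LinkedHashMap<>();
        schema.put("title", true);
        schema.put("body", true);
        schema.put("popularity", false); // stored-only field: excluded from qf=*
        System.out.println(expandQf("*", schema));          // prints [title, body]
        System.out.println(expandQf("title body", schema)); // prints [title, body]
    }
}
```

This also makes the warning below tangible: with hundreds of indexed fields, the expanded list (and hence the query) grows accordingly.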

With a {{qf=*}} catch-all, the WARNING in the docs needs to instead be a warning 
that {{qf}} should be tuned or else the query may be too expensive for indices 
with many fields. Another issue with this approach is for installs where the 
schema lists hundreds of fields but most docs in the index contain only a 
handful of fields. It could perhaps be possible to do a two-phase search where 
the first phase is to compute the fields in use for the doc set after applying 
all fq's, and phase 2 is to search across those fields.
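The two-phase idea can be modelled in miniature as follows. This is purely illustrative: documents are plain maps and the fq's are a predicate, whereas in real Solr phase 1 would have to be derived from the index itself, which is where the cost lies:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

public class TwoPhaseFields {
    // Phase 1: compute the union of field names over the docs matching the fq's.
    // Phase 2 (not shown) would then run the main query restricted to this set.
    static Set<String> fieldsInUse(List<Map<String, Object>> docs,
                                   Predicate<Map<String, Object>> fqs) {
        Set<String> fields = new LinkedHashSet<>();
        for (Map<String, Object> doc : docs) {
            if (fqs.test(doc)) {
                fields.addAll(doc.keySet());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> docs = List.of(
                Map.of("id", "1", "title", "hello"),
                Map.of("id", "2", "body", "world"));
        // Only doc 1 survives the fq, so the search would shrink to its two fields.
        System.out.println(fieldsInUse(docs, d -> d.containsKey("title")));
    }
}
```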

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






[jira] [Commented] (SOLR-10890) Parallel SQL - column not found error

2017-06-14 Thread Yury Kats (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049576#comment-16049576
 ] 

Yury Kats commented on SOLR-10890:
--

Mail list discussion: http://markmail.org/message/vsxb726cdrhflst7

> Parallel SQL - column not found error
> -
>
> Key: SOLR-10890
> URL: https://issues.apache.org/jira/browse/SOLR-10890
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.6
>Reporter: Susheel Kumar
>Priority: Minor
>
> Parallel SQL throws a "column not found" error when the query hits multiple 
> shards and one of the shards doesn't have any documents yet.
> Sample error
> == 
> {"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT  
> sr_sv_userFirstName as firstName, sr_sv_userLastName as lastName FROM 
> collection1 ORDEr BY dv_sv_userLastName LIMIT 15' against JDBC connection 
> 'jdbc:calcitesolr:'.\nError while executing SQL \"SELECT  sr_sv_userFirstName 
> as firstName, sr_sv_userLastName as lastName FROM collection1 ORDEr BY 
> dv_sv_userLastName LIMIT 15\": From line 1, column 9 to line 1, column 27: 
> Column 'sr_sv_userFirstName' not found in any 
> table","EOF":true,"RESPONSE_TIME":87}]}}






Re: Compile all files (including tests)

2017-06-14 Thread Uwe Schindler
Ant compile-test?

Am 14. Juni 2017 21:09:36 MESZ schrieb Jason Gerlowski :
>Hey all,
>
>Is there an ant command we support for compiling *all* files
>(including tests) at once?
>
>My understanding is that "ant test" compiles each test package right
>before it's run.  This is occasionally a pain, as test runs can
>fail halfway through because of a compilation error that
>could've been caught up-front.
>
>It'd be really convenient if there was a command that supported the
>use case of: "just make sure everything compiles".  Does anyone know
>of an ant command to do this that I'm just missing?  ("ant test-help"
>doesn't mention any options that seem helpful).
>
>Jason
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de


[jira] [Commented] (SOLR-10079) TestInPlaceUpdates(Distrib|Standalone) failures

2017-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049557#comment-16049557
 ] 

Steve Rowe commented on SOLR-10079:
---

My Jenkins found a reproducing master seed for a timeout failure:

{noformat}
Checking out Revision f470bbcbdc930c24c3b1e301d529a26c046f195f 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test 
-Dtests.seed=F02AC8BA5333D665 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=pt -Dtests.timezone=America/Indianapolis -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   1256s J3 | TestInPlaceUpdatesDistrib.test <<<
   [junit4]> Throwable #1: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:45981/xj/mx/collection1
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F02AC8BA5333D665:787EF760FDCFBB9D]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:603)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:219)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
   [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.addDocAndGetVersion(TestInPlaceUpdatesDistrib.java:1080)
   [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.buildRandomIndex(TestInPlaceUpdatesDistrib.java:1125)
   [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:329)
   [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:156)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.net.SocketTimeoutException: Read timed out
   [junit4]>at java.net.SocketInputStream.socketRead0(Native Method)
   [junit4]>at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
   [junit4]>at 
java.net.SocketInputStream.read(SocketInputStream.java:170)
   [junit4]>at 
java.net.SocketInputStream.read(SocketInputStream.java:141)
   [junit4]>at 
sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
   [junit4]>at 
sun.security.ssl.InputRecord.read(InputRecord.java:503)
   [junit4]>at 
sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
   [junit4]>at 
sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
   [junit4]>at 
sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
   [junit4]>at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
   [junit4]>at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
   [junit4]>at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
   [junit4]>at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
   [junit4]>at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
   [junit4]>at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
   [junit4]>at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
   [junit4]>at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
   [junit4]>at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
   [junit4]>at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
   [junit4]>at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
   [junit4]>at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
   [junit4]>at 
org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
   [junit4]>at 

[jira] [Created] (SOLR-10890) Parallel SQL - column not found error

2017-06-14 Thread Susheel Kumar (JIRA)
Susheel Kumar created SOLR-10890:


 Summary: Parallel SQL - column not found error
 Key: SOLR-10890
 URL: https://issues.apache.org/jira/browse/SOLR-10890
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Parallel SQL
Affects Versions: 6.6
Reporter: Susheel Kumar
Priority: Minor


Parallel SQL throws a "column not found" error when the query hits multiple 
shards and one of the shards doesn't have any documents yet.

Sample error
== 
{"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT  
sr_sv_userFirstName as firstName, sr_sv_userLastName as lastName FROM 
collection1 ORDEr BY dv_sv_userLastName LIMIT 15' against JDBC connection 
'jdbc:calcitesolr:'.\nError while executing SQL \"SELECT  sr_sv_userFirstName 
as firstName, sr_sv_userLastName as lastName FROM collection1 ORDEr BY 
dv_sv_userLastName LIMIT 15\": From line 1, column 9 to line 1, column 27: 
Column 'sr_sv_userFirstName' not found in any 
table","EOF":true,"RESPONSE_TIME":87}]}}






Compile all files (including tests)

2017-06-14 Thread Jason Gerlowski
Hey all,

Is there an ant command we support for compiling *all* files
(including tests) at once?

My understanding is that "ant test" compiles each test package right
before it's run.  This is occasionally a pain, as test runs can
fail halfway through because of a compilation error that
could've been caught up-front.

It'd be really convenient if there was a command that supported the
use case of: "just make sure everything compiles".  Does anyone know
of an ant command to do this that I'm just missing?  ("ant test-help"
doesn't mention any options that seem helpful).

Jason




[jira] [Commented] (SOLR-10834) test configs should be changed to stop using numeric based uniqueKey field

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049470#comment-16049470
 ] 

Hoss Man commented on SOLR-10834:
-

bq. I am however still seeing some fairly consistent failures from SuggesterTest 
and its subclasses ...

I cannot for the life of me explain how SuggesterTest manages to pass on 
master -- it has some wacky "reload the core" logic in {{testReload()}} that 
winds up creating a new core with the default {{solrconfig.xml}} and 
{{schema.xml}} instead of the {{*-spellchecker.xml}} files the core was 
originally using, and that new core, with its totally incorrect config/schema, 
can wind up being used in other test methods that come later.

Somehow, on master, where {{schema.xml}} was using an "int" id field, this 
didn't manage to cause any failures, but in our SOLR-10834 branch, where both 
schemas agree that "id" is a string, we get problems. I ripped out most of this 
hokey "reload" code and replaced it with {{TestHarness.reload()}}, and all is 
right with the world.

At present, all tests & precommit pass on the branch ... I'll aim to 
squash-merge it into master tomorrow.

> test configs should be changed to stop using numeric based uniqueKey field
> --
>
> Key: SOLR-10834
> URL: https://issues.apache.org/jira/browse/SOLR-10834
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Apparently, once upon a time, as a way to prove that it worked and there were 
> no hard-coded "String" assumptions, some of the {{schema.xml}} files used by 
> tests were written such that the uniqueKey field was defined as an "int".
> This has now snowballed such that there are at least 40 test schema files 
> (just in solr/core!) that define the uniqueKey field using a Trie field 
> (mostly TrieInt, but at least 2 TrieFloats!) despite the fact that at no 
> point have we ever recommended/encouraged people to use anything other than 
> StrField for their uniqueKey.
> That's nearly 1/3 of all the test schemas that we have -- which IIRC (from 
> some early experiments in SOLR-10807) are used in more than half the 
> solr/core tests.
> If we want to be able to deprecate/remove Trie fields in favor of point 
> fields, we're really going to have to update all of these test schemas to use 
> a StrField (we can't use PointFields as the uniqueKey due to the issues noted 
> in SOLR-10829) ... but AFAICT that's going to require a non-trivial amount of 
> work due to many of these tests making explicit/implicit assumptions about 
> the datatype of the uniqueKey field (ex: sorting by id, range queries on ids, 
> casting stored field values returned by solrj, xpath expressions, etc...)






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 896 - Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/896/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([54C2D70AC406E989:8C8FFA5D33DB4C29]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10834) test configs should be changed to stop using numeric based uniqueKey field

2017-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049463#comment-16049463
 ] 

ASF subversion and git services commented on SOLR-10834:


Commit b26bf9d60e2b94e0cdc365d1e2c0a37c33e24198 in lucene-solr's branch 
refs/heads/jira/SOLR-10834 from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b26bf9d ]

Merge branch 'master' into jira/SOLR-10834


> test configs should be changed to stop using numeric based uniqueKey field
> --
>
> Key: SOLR-10834
> URL: https://issues.apache.org/jira/browse/SOLR-10834
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Apparently, once upon a time, as a way to prove that it worked and there were 
> no hard-coded "String" assumptions, some of the {{schema.xml}} files used by 
> tests were written such that the uniqueKey field was defined as an "int".
> This has now snowballed such that there are at least 40 test schema files 
> (just in solr/core!) that define the uniqueKey field using a Trie field 
> (mostly TrieInt, but at least 2 TrieFloats!) despite the fact that at no 
> point have we ever recommended/encouraged people to use anything other than 
> StrField for their uniqueKey.
> That's nearly 1/3 of all the test schemas that we have -- which IIRC (from 
> some early experiments in SOLR-10807) are used in more than half the 
> solr/core tests.
> If we want to be able to deprecate/remove Trie fields in favor of point 
> fields, we're really going to have to update all of these test schemas to use 
> a StrField (we can't use PointFields as the uniqueKey due to the issues noted 
> in SOLR-10829) ... but AFAICT that's going to require a non-trivial amount of 
> work due to many of these tests making explicit/implicit assumptions about 
> the datatype of the uniqueKey field (ex: sorting by id, range queries on ids, 
> casting stored field values returned by solrj, xpath expressions, etc...)






[jira] [Commented] (SOLR-10876) Regression in loading runtime UpdateRequestProcessors

2017-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049462#comment-16049462
 ] 

ASF subversion and git services commented on SOLR-10876:


Commit c3c895548f6334566c20f2396a33fdc8c031ab89 in lucene-solr's branch 
refs/heads/jira/SOLR-10834 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3c8955 ]

SOLR-10876: Regression in loading runtime UpdateRequestProcessors like 
TemplateUpdateProcessorFactory


> Regression in loading runtime UpdateRequestProcessors
> -
>
> Key: SOLR-10876
> URL: https://issues.apache.org/jira/browse/SOLR-10876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-10876.patch
>
>
> This was introduced as a part of SOLR-9530






[jira] [Commented] (SOLR-10876) Regression in loading runtime UpdateRequestProcessors

2017-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049461#comment-16049461
 ] 

ASF subversion and git services commented on SOLR-10876:


Commit 92b17838a346ad55a6a4ab796b8ab8cbbe4ffea2 in lucene-solr's branch 
refs/heads/jira/SOLR-10834 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=92b1783 ]

SOLR-10876: Regression in loading runtime UpdateRequestProcessors like 
TemplateUpdateProcessorFactory


> Regression in loading runtime UpdateRequestProcessors
> -
>
> Key: SOLR-10876
> URL: https://issues.apache.org/jira/browse/SOLR-10876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-10876.patch
>
>
> This was introduced as a part of SOLR-9530






[jira] [Commented] (LUCENE-7876) Avoid needless calls to LeafReader.fields and MultiFields.getFields

2017-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049460#comment-16049460
 ] 

ASF subversion and git services commented on LUCENE-7876:
-

Commit f470bbcbdc930c24c3b1e301d529a26c046f195f in lucene-solr's branch 
refs/heads/jira/SOLR-10834 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f470bbc ]

LUCENE-7876 avoid leafReader.fields


> Avoid needless calls to LeafReader.fields and MultiFields.getFields
> ---
>
> Key: LUCENE-7876
> URL: https://issues.apache.org/jira/browse/LUCENE-7876
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE_7876_avoid_leafReader_fields.patch
>
>
> In LUCENE-7500 we're removing LeafReader.fields for 7.x.  Here in this issue 
> for 6.x and 7.x we simply avoid calling this method (and also 
> MultiFields.getFields) when there is an obvious replacement for 
> LeafReader.terms(field) (and MultiFields.getTerms).  Any absolutely 
> non-trivial changes are occurring in LUCENE-7500.






Re: Reference guide editing: newbie notes and kudos to Cassandra and Hoss:

2017-06-14 Thread Cassandra Targett
I'm *really* glad people are finding it a positive change.

We definitely need to have it searchable, but someone needs to work on
making it happen: https://issues.apache.org/jira/browse/SOLR-10299.

On Wed, Jun 14, 2017 at 10:49 AM, David Smiley  wrote:
> Thanks to Cassandra and Hoss indeed!
>
> Is there going to be search of the ref guide somehow?  If it's searchable
> somewhere else then we could at least refer users there. Quick title access
> is working at least.
>
> On Wed, May 31, 2017 at 11:01 AM Jan Høydahl  wrote:
>>
>> Agree, Erick. The new guide is amazing!
>>
>> Steve, could we perhaps have a change-history.adoc as part of the refguide
>> itself, so every (major) change to the guide would add a line on that page?
>> Benefit is that it would follow the guide, whether in HTML or PDF format.
>>
>> Another option is to just keep it lightweight and do some kind of GIT
>> magic as part of refGuide release process to select all commits since last
>> release that include adoc changes, and then pull the commit messages from
>> those and generate a release-notes file. We could also include in that file
>> a git diff that could be used as an aid for people when reviewing/voting for
>> a ref-guide release, i.e. one place to double check that no weird edits have
>> sneaked in since last release…
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> > 30. mai 2017 kl. 20.37 skrev Steve Rowe :
>> >
>> >
>> >> On May 30, 2017, at 2:04 PM, Erick Erickson 
>> >> wrote:
>> >>
>> >> 4> I don't think minor edits require a JIRA, larger ones maybe. Pretty
>> >> much just like the CWiki I suppose.
>> >
>> > One more thing I ran into while making the changes on SOLR-10758:
>> >
>> > 5> I included a new "Ref Guide” section under the 6.6 release in
>> > solr/CHANGES.txt, but this was premature, since: a) the ref guide release 
>> > is
>> > still separate from the code release, so solr/CHANGES.txt isn’t the right
>> > place (yet); and b) even after we make the ref guide release part of the
>> > code release, it’s not clear that ref guide change notes belong in
>> > solr/CHANGES.txt, since e.g. javadocs-only changes never get mentioned
>> > there.  (Personally I think there should eventually be some form of
>> > CHANGES-like release notes for the ref guide.)
>> >
>> > (I haven’t reverted my “Ref Guide” section addition to solr/CHANGES.txt
>> > because there is a 6.6 RC vote underway, and if it succeeds reversion will
>> > be pointless.)
>> >
>> > --
>> > Steve
>> > www.lucidworks.com
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7878) QueryParser AND default operator and MultiWords synonyms failed if keywords exactly matches a synonym

2017-06-14 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-7878:
-
Attachment: LUCENE-7878.patch

A new patch that passes all the tests

> QueryParser AND default operator and MultiWords synonyms failed if keywords 
> exactly matches a synonym
> -
>
> Key: LUCENE-7878
> URL: https://issues.apache.org/jira/browse/LUCENE-7878
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: master (7.0), 6.6, 6.5.1
> Environment: 1.8.0_131-b11 on Mac OS X
>Reporter: Emmanuel Keller
>  Labels: multi-word, query-parser, synonyms
> Attachments: LUCENE-7878.patch, LUCENE-7878.patch
>
>
> This issue is about using the QueryParser with MultiWordsSynonyms.
> To reproduce the bug:
> - Use AND as default operator
> - Use a query string which exactly matches one synonym.
> In short, the part of the query which handles the synonym lookup should keep an 
> "OR" relation between the synonyms, but it is translated as an "AND".
> If I parse: "guinea pig" which is a synonym of "cavy":
> Using default OR, I get something correct:
> "(+guinea +pig) cavy"
> Note: it would probably be better to have ((+guinea +pig) cavy)
> Using AND as default operator, I get something wrong:
> +(+guinea +pig) +cavy
> I expected:
> +((+guinea +pig) cavy)
> The relation between "guinea pig" and "cavy" is now an AND. It should still 
> be an OR because it is a synonym clause.
> To help understanding: if I now parse "guinea pig world",
> I get the expected result:
> +((+guinea +pig) cavy) +world
> The relation between "guinea pig" and "cavy" is an OR as expected (it is a 
> synonym), and the relation with "world" is an AND as expected from the default 
> operator.
> Here is the additional unit test; I hope it is pretty self-explanatory:
> org.apache.lucene.queryparser.classic.TestQueryParser
> {code:java}
> public void testDefaultOperatorWhenKeywordsMatchesExactlyOneSynonym() throws 
> ParseException {
> // Using the default OR operator
> QueryParser smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.OR);
> assertEquals("(+guinea +pig) cavy", smart.parse("guinea pig").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy) +world", smart.parse("guinea pig world").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy)", smart.parse("guinea pig").toString("field"));
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Reference guide editing: newbie notes and kudos to Cassandra and Hoss:

2017-06-14 Thread David Smiley
Thanks to Cassandra and Hoss indeed!

Is there going to be search of the ref guide somehow?  If it's searchable
somewhere else then we could at least refer users there. Quick title access
is working at least.

On Wed, May 31, 2017 at 11:01 AM Jan Høydahl  wrote:

> Agree, Erick. The new guide is amazing!
>
> Steve, could we perhaps have a change-history.adoc as part of the refguide
> itself, so every (major) change to the guide would add a line on that page?
> Benefit is that it would follow the guide, whether in HTML or PDF format.
>
> Another option is to just keep it lightweight and do some kind of GIT
> magic as part of refGuide release process to select all commits since last
> release that include adoc changes, and then pull the commit messages from
> those and generate a release-notes file. We could also include in that file
> a git diff that could be used as an aid for people when reviewing/voting
> for a ref-guide release, i.e. one place to double check that no weird edits
> have sneaked in since last release…
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 30. mai 2017 kl. 20.37 skrev Steve Rowe :
> >
> >
> >> On May 30, 2017, at 2:04 PM, Erick Erickson 
> wrote:
> >>
> >> 4> I don't think minor edits require a JIRA, larger ones maybe. Pretty
> >> much just like the CWiki I suppose.
> >
> > One more thing I ran into while making the changes on SOLR-10758:
> >
> > 5> I included a new "Ref Guide” section under the 6.6 release in
> solr/CHANGES.txt, but this was premature, since: a) the ref guide release
> is still separate from the code release, so solr/CHANGES.txt isn’t the
> right place (yet); and b) even after we make the ref guide release part of
> the code release, it’s not clear that ref guide change notes belong in
> solr/CHANGES.txt, since e.g. javadocs-only changes never get mentioned
> there.  (Personally I think there should eventually be some form of
> CHANGES-like release notes for the ref guide.)
> >
> > (I haven’t reverted my “Ref Guide” section addition to solr/CHANGES.txt
> because there is a 6.6 RC vote underway, and if it succeeds reversion will
> be pointless.)
> >
> > --
> > Steve
> > www.lucidworks.com
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+173) - Build # 19866 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19866/
Java: 32bit/jdk-9-ea+173 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([38F714E166AE3BAC:A2036903F834A790]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:890)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:883)
... 39 more




Build Log:
[...truncated 12136 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Assigned] (SOLR-10177) Consolidate randomized usage of PointFields in schemas

2017-06-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-10177:
---

Assignee: Hoss Man

> Consolidate randomized usage of PointFields in schemas
> --
>
> Key: SOLR-10177
> URL: https://issues.apache.org/jira/browse/SOLR-10177
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Hoss Man
>
> schema-inplace-updates.xml uses per-fieldType point fields randomization, 
> whereas some other schemas use per-field randomization. However, the variable 
> name is similar and should be revisited and standardized across our tests.
> Discussions here 
> https://issues.apache.org/jira/browse/SOLR-5944?focusedCommentId=15875108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15875108.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10177) Consolidate randomized usage of PointFields in schemas

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049377#comment-16049377
 ] 

Hoss Man commented on SOLR-10177:
-

I actually think we can eliminate the randomized {{}} usage completely and focus solely on randomizing the {{}} approach (which would also simplify/reduce the 
number of fieldTypes we need to add to all schemas) ... see SOLR-10864 for 
details.

Assuming SOLR-10864 / SOLR-10807 moves forward, I'll fold SOLR-10177 into those 
and remove the existing "type name" randomization.

> Consolidate randomized usage of PointFields in schemas
> --
>
> Key: SOLR-10177
> URL: https://issues.apache.org/jira/browse/SOLR-10177
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> schema-inplace-updates.xml uses per-fieldType point fields randomization, 
> whereas some other schemas use per-field randomization. However, the variable 
> name is similar and should be revisited and standardized across our tests.
> Discussions here 
> https://issues.apache.org/jira/browse/SOLR-5944?focusedCommentId=15875108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15875108.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10864) Add static (test only) boolean to PointField indicating 'precisionStep' should be ignored

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049376#comment-16049376
 ] 

Hoss Man commented on SOLR-10864:
-

Thanks, Steve.

I should have pointed out before: you can see an approximation of what this will 
look like in the branch I've been working on for the parent issue: 
jira/SOLR-10807

In particular this commit gives the flavor of the bulk of the change...

https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c76a79b

Unless there are any objections, I'll assume this choice is OK, and continue 
moving forward with this approach in branch jira/SOLR-10807 (once some other 
related bugs are fixed on master) and resolve this issue.

> Add static (test only) boolean to PointField indicating 'precisionStep' 
> should be ignored
> -
>
> Key: SOLR-10864
> URL: https://issues.apache.org/jira/browse/SOLR-10864
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
>
> (I'm spinning this idea out of parent jira SOLR-10807 so that it gets its 
> own jira# w/ its own summary for increased visibility/comments)
> In the interest of making it easier & more straightforward to get good 
> randomized test coverage of Points fields, I'd like to add the following to 
> the {{PointField}} class...
> {code}
>  /**
>   * 
>   * The Test framework can set this global variable to instruct PointField 
> that
>   * (on init) it should be tolerant of the precisionStep 
> argument used by TrieFields.
>   * This allows for simple randomization of TrieFields and PointFields w/o 
> extensive duplication
>   * of fieldType/ declarations.
>   * 
>   *
>   * NOTE: When {@link TrieField} is removed, this boolean must also be 
> removed
>   *
>   * @lucene.internal
>   * @lucene.experimental
>   */
>  public static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;
>  /** 
>   * NOTE: This method can be removed completely when
>   * {@link #TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS} is removed 
>   */
>  @Override
>  protected void init(IndexSchema schema, Map args) {
>super.init(schema, args);
>if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
>  args.remove("precisionStep");
>}
>  }
> {code}
> Then in SolrTestCaseJ4, set 
> {{PointField.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS}} on a class by class 
> basis when randomizing Trie/Points (and unset \@AfterClass).
> (details to follow in comment)
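
The pattern proposed in the description above can be sketched in plain Java without any Solr classes. This is a hedged illustration, not Solr code: `PointFieldSketch` and its `init` signature are invented for the example, though the flag name mirrors the proposal. A test framework would flip the static flag per test class and reset it in an @AfterClass hook.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "static test-only flag" pattern: a global switch the test
// framework flips so init() silently drops an argument that only the legacy
// (Trie) field type understands.
public class TestHackSketch {
    static class PointFieldSketch {
        // Test framework sets this per test class and resets it afterwards.
        static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;

        void init(Map<String, String> args) {
            if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
                args.remove("precisionStep"); // tolerated but unused by points
            }
            if (!args.isEmpty()) {
                throw new IllegalArgumentException("invalid args: " + args);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> trieArgs = new HashMap<>();
        trieArgs.put("precisionStep", "8");

        // With the hack enabled, a Trie-style declaration initializes cleanly:
        PointFieldSketch.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = true;
        new PointFieldSketch().init(new HashMap<>(trieArgs));
        System.out.println("hack on: init succeeded");

        // With it disabled (the default), the unknown arg is rejected:
        PointFieldSketch.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;
        try {
            new PointFieldSketch().init(new HashMap<>(trieArgs));
        } catch (IllegalArgumentException expected) {
            System.out.println("hack off: rejected as expected");
        }
    }
}
```

The payoff described above is that one fieldType declaration (with `precisionStep`) can be shared by randomized Trie/Point test schemas without duplicating every declaration.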



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1367 - Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1367/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MBeansHandlerTest.testDiff

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([314B24482EB4B91B:F45DE0D33E02817B]:0)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffObject(SolrInfoMBeanHandler.java:240)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffNamedList(SolrInfoMBeanHandler.java:219)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.getDiff(SolrInfoMBeanHandler.java:187)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:87)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:178)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2487)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at 
org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Re: Solr macro expansion -- interfering with ManifoldCF posts?

2017-06-14 Thread David Smiley
I agree this is concerning.  There should at least be more publicity about
this... maybe an FAQ on escaping user queries.  I chased through the details
and it appears macro expansion snuck in as part of the JSON Facet API,
SOLR-7214 for v5.1, even though it affects requests not related to the JSON
Facet feature.

Just now in an app I'm working on I used this macro expansion inside my
user query (parsed by edismax) with a reference to another parameter and,
sure enough, it did the substitution.  I changed it to an undefined
parameter and it threw an exception (not desirable for user queries).  I
suppose it's not that risky, but it's something that shouldn't throw an
error if a user is perhaps searching a source code search engine, for
example, using such a pattern.  It might also allow a user to spy on the
internals... like trying to reference ${qf} and then having the search engine
spit back what your query was, so you can learn some of the field
names and try more advanced queries.

Another sneaky thing is that even edismax lets you switch to another query
parser by starting your user query with {! etc. :-)  edismax in
particular ought not to allow that.  In my search apps I commonly put a
leading space to thwart this.
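
A minimal illustration of the concern, assuming nothing about Solr's actual MacroExpander internals: the `expand` helper below is invented for this sketch, but it shows both failure modes described above. An undefined `${...}` reference aborts the whole request, and a defined one echoes an internal parameter value back into user-visible output.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hedged sketch of ${param} macro expansion -- NOT Solr's MacroExpander --
// illustrating why expanding macros inside user-supplied queries is risky.
public class MacroExpansionSketch {
    private static final Pattern MACRO = Pattern.compile("\\$\\{(\\w+)\\}");

    static String expand(String input, Map<String, String> params) {
        Matcher m = MACRO.matcher(input);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = params.get(m.group(1));
            if (value == null) {
                // An undefined reference fails the whole request --
                // undesirable when the "${...}" text came from an end user.
                throw new IllegalArgumentException("undefined macro: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> internal = Map.of("qf", "title^2 body");
        // A user query of "${qf}" leaks the internal parameter value:
        System.out.println(expand("${qf}", internal)); // prints: title^2 body
    }
}
```

Escaping or rejecting `${` in user input before it reaches the request parameters (or disabling expansion with `expandMacros=false`, as noted below) avoids both problems.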

On Wed, Jun 14, 2017 at 9:49 AM Karl Wright  wrote:

> Hi Erik,
>
> I only have snippets of logs from the user so my details are limited.  I
> could request the parameter value for you if you like.  But since it's
> coming out of some large Documentum repository, pretty much anything is
> possible. ;-)  Documentum may have attributes that also use similar
> escaping for macros AFAICT.
>
> Karl
>
>
> On Wed, Jun 14, 2017 at 9:38 AM, Erik Hatcher 
> wrote:
>
>> Karl -
>>
>> There’s expandMacros=false, as covered here:
>> https://cwiki.apache.org/confluence/display/solr/Parameter+Substitution
>>
>> But… what exactly is being sent to Solr?  Is there some kind of “${…”
>> being sent as a parameter?   Just curious what’s getting you into this in
>> the first place.   But disabling probably is your most desired solution.
>>
>> Erik
>>
>>
>> > On Jun 14, 2017, at 9:34 AM, Karl Wright  wrote:
>> >
>> > Hi all,
>> >
>> > I've got a ManifoldCF user who is posting content to Solr using the MCF
>> Solr output connector.  This connector uses SolrJ under the covers -- a
>> fairly recent version -- but also has overridden some classes to ensure
>> that multipart form posts will be used for most content.
>> >
>> > The problem is that, for a specific document, the user is getting an
>> ArrayIndexOutOfBounds exception in Solr, as follows:
>> >
>> > >>
>> > 2017-06-14T08:25:16,546 - ERROR [qtp862890654-69725:SolrException@148]
>> - {collection=c:documentum_manifoldcf_stg,
>> core=x:documentum_manifoldcf_stg_shard1_replica1,
>> node_name=n:**:8983_solr, replica=r:core_node1, shard=s:shard1} -
>> java.lang.StringIndexOutOfBoundsException: String index out of range: -296
>> > at java.lang.String.substring(String.java:1911)
>> > at
>> org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:143)
>> > at
>> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:93)
>> > at
>> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:59)
>> > at
>> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:45)
>> > at
>> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:157)
>> > at
>> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:172)
>> > at
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:152)
>> > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2102)
>> > at
>> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>> > at
>> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>> > at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>> > at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>> > at
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>> > at
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>> > at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>> > at
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>> > at
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>> > at
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>> > at
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>> > at
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>> > at
>> 

[jira] [Commented] (SOLR-7878) Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric fields)

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049348#comment-16049348
 ] 

Hoss Man commented on SOLR-7878:


bq. ...a new {{FacetFieldProcessorByArray}} subclass...

...Or perhaps, if FacetFieldProcessorByArray assumes we're always dealing with 
ordinals, a completely new subclass of {{FacetFieldProcessor}} to deal directly 
with the DocValue "values" ?

> Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric 
> fields)
> --
>
> Key: SOLR-7878
> URL: https://issues.apache.org/jira/browse/SOLR-7878
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: David Smiley
>
> Lucene has a SortedNumericDocValues (i.e. multi-valued numeric DocValues), 
> ever since late in the 4x versions.  Solr's TrieField.createFields 
> unfortunately still uses SortedSetDocValues for the multi-valued case.  
> SortedNumericDocValues is more efficient than SortedSetDocValues; for example 
> there is no 'ordinal' mapping for sorting/faceting needed.  
> Unfortunately, updating Solr here would be quite a bit of work, since there 
> are backwards-compatibility concerns, and faceting code would need a new code 
> path implementation just for this.  Sorting is relatively simple thanks to 
> SortedNumericSortField, and today multi-valued sorting isn't directly 
> possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7878) Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric fields)

2017-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049342#comment-16049342
 ] 

Hoss Man commented on SOLR-7878:


I'm not too familiar with the code in question, but...

# IIUC you're suggesting a (cached) wrapper around SortedNumericDocValues that 
would require at least a couple of numeric->(legacy-numeric)byte->numeric 
conversions to be used as a drop-in substitute for SortedSetDocValues ... is 
that really something that would be easier/better than creating a new 
{{FacetFieldProcessorByArray}} subclass specifically for dealing with 
SortedNumericDocValues?
# shouldn't this discussion be in SOLR-9989 ... frankly I'm not clear on why 
SOLR-7878 is still open, since PointFields already use NumericDocValues; it 
seems unlikely we'll want to keep TrieFields around long enough to bother with 
trying to make it possible/optional to use NumericDocValues with TrieFields.

> Use SortedNumericDocValues (efficient sort & facet on multi-valued numeric 
> fields)
> --
>
> Key: SOLR-7878
> URL: https://issues.apache.org/jira/browse/SOLR-7878
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: David Smiley
>
> Lucene has a SortedNumericDocValues (i.e. multi-valued numeric DocValues), 
> ever since late in the 4x versions.  Solr's TrieField.createFields 
> unfortunately still uses SortedSetDocValues for the multi-valued case.  
> SortedNumericDocValues is more efficient than SortedSetDocValues; for example 
> there is no 'ordinal' mapping for sorting/faceting needed.  
> Unfortunately, updating Solr here would be quite a bit of work, since there 
> are backwards-compatibility concerns, and faceting code would need a new code 
> path implementation just for this.  Sorting is relatively simple thanks to 
> SortedNumericSortField, and today multi-valued sorting isn't directly 
> possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7878) QueryParser AND default operator and MultiWords synonyms failed if keywords exactly matches a synonym

2017-06-14 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-7878:
-
Attachment: LUCENE-7878.patch

Here is a patch. The logic to simplify the boolean query produced by the graph 
analysis was wrong.

> QueryParser AND default operator and MultiWords synonyms failed if keywords 
> exactly matches a synonym
> -
>
> Key: LUCENE-7878
> URL: https://issues.apache.org/jira/browse/LUCENE-7878
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: master (7.0), 6.6, 6.5.1
> Environment: 1.8.0_131-b11 on Mac OS X
>Reporter: Emmanuel Keller
>  Labels: multi-word, query-parser, synonyms
> Attachments: LUCENE-7878.patch
>
>
> This issue is about using the QueryParser with MultiWordsSynonyms.
> To reproduce the bug:
> - Use AND as the default operator
> - Use a query string which exactly matches one synonym.
> In short, the part of the query which handles the synonym lookup should keep an 
> "OR" relation between the synonyms, but it is translated as an "AND".
> If I parse "guinea pig", which is a synonym of "cavy":
> Using the default OR, I get something correct:
> "(+guinea +pig) cavy"
> Note: it should probably be ((+guinea +pig) cavy)
> Using AND as the default operator, I get something wrong:
> +(+guinea +pig) +cavy
> I expected:
> +((+guinea +pig) cavy)
> The relation between "guinea pig" and "cavy" is now an AND. It should still be 
> an OR because it is a synonym clause.
> To aid understanding: if I now parse "guinea pig world",
> I get the expected result:
> +((+guinea +pig) cavy) +world
> The relation between "guinea pig" and "cavy" is an OR as expected (it is a 
> synonym), and the relation with "world" is an AND, as expected from the default 
> operator.
> Here is the additional unit test; I hope it is pretty self-explanatory:
> org.apache.lucene.queryparser.classic.TestQueryParser
> {code:java}
> public void testDefaultOperatorWhenKeywordsMatchesExactlyOneSynonym() throws 
> ParseException {
> // Using the default OR operator
> QueryParser smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.OR);
> assertEquals("(+guinea +pig) cavy", smart.parse("guinea 
> pig").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy) +world", smart.parse("guinea pig 
> world").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy)", smart.parse("guinea 
> pig").toString("field"));
>   }
> {code}
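The grouping the reporter expects can be sketched outside of Lucene; the helper names below are hypothetical, not query-parser API. The point is that a multi-word synonym expansion stays a single OR group, and the default AND operator wraps that group as a whole rather than distributing into it:

```python
# Sketch of the intended clause grouping (not Lucene's query classes).

def synonym_group(phrase_terms, synonym):
    # "(+guinea +pig) cavy": OR of the phrase conjunction and the synonym.
    phrase = "(" + " ".join("+" + t for t in phrase_terms) + ")"
    return phrase + " " + synonym

def apply_and_operator(clauses):
    # Default AND makes each *top-level* clause required; a multi-clause
    # group is parenthesized so the '+' applies to the group as a whole.
    return " ".join("+(" + c + ")" if " " in c else "+" + c
                    for c in clauses)

q = apply_and_operator([synonym_group(["guinea", "pig"], "cavy"), "world"])
# q is "+((+guinea +pig) cavy) +world"
```

The reported bug is effectively the single-clause case: the OR group is collapsed and the '+' ends up applied to each synonym alternative individually.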



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10889) Stale zookeeper information is used during failover check

2017-06-14 Thread Mihaly Toth (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mihaly Toth updated SOLR-10889:
---
Attachment: SOLR-10889.patch

Here is the unit test and the implementation (the first is the larger part):
* Time is "mocked" out: an interface was introduced for obtaining nanoseconds, 
and the test overrides it.
* Each doWork loop iteration is invoked separately from the test; no forever 
loop is used.
* Hamcrest matchers are used for the collection asserts.
* updateExecutor executes the code in the same thread context, so there is no 
problem waiting for the background thread to complete.
* Core create requests are not actually executed; they are collected into a 
list and verified from the test.

Comments are welcome.
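The time-mocking approach described above can be sketched as follows (a Python sketch with hypothetical names; the actual patch introduces a Java interface around nanoTime):

```python
import time

class SystemTime:
    """Production time source backed by the real monotonic clock."""
    def nanos(self):
        return time.monotonic_ns()

class FakeTime:
    """Test time source: the test advances the clock explicitly."""
    def __init__(self):
        self.now = 0
    def nanos(self):
        return self.now
    def advance(self, ns):
        self.now += ns

class FailoverThread:
    """Illustrative stand-in: runs its work loop only after a delay elapses."""
    def __init__(self, time_source, work_loop_delay_ns):
        self.time = time_source
        self.delay = work_loop_delay_ns
        self.last_run = time_source.nanos()
    def should_run(self):
        return self.time.nanos() - self.last_run >= self.delay

# The test drives each loop iteration itself: advance time, then check.
clock = FakeTime()
t = FailoverThread(clock, 1_000_000)
clock.advance(500_000)
ran_early = t.should_run()   # not enough time elapsed yet
clock.advance(600_000)
ran_later = t.should_run()   # delay exceeded
```

Injecting the clock this way is what lets the whole suite run in about a second instead of sleeping through real delays.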

> Stale zookeeper information is used during failover check
> 
>
> Key: SOLR-10889
> URL: https://issues.apache.org/jira/browse/SOLR-10889
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Mihaly Toth
> Attachments: SOLR-10889.patch
>
>
> In {{OverseerAutoReplicaFailoverThread}}, the code iterates over every replica 
> to check whether it needs to be reloaded on a new node. In each such round it 
> reads the cluster state only once, at the beginning. Especially in the case of 
> big clusters, the cluster state may change while iterating through the 
> replicas. As a result, false decisions may be made: restarting a healthy 
> core, or not handling a bad node.
> The code fragment in question:
> {code}
> for (Slice slice : slices) {
>   if (slice.getState() == Slice.State.ACTIVE) {
> final Collection downReplicas = new 
> ArrayList();
> int goodReplicas = findDownReplicasInSlice(clusterState, 
> docCollection, slice, downReplicas);
> {code}
> The solution seems rather straightforward, reading the state every time:
> {code}
> int goodReplicas = 
> findDownReplicasInSlice(zkStateReader.getClusterState(), docCollection, 
> slice, downReplicas);
> {code}
> The only counterargument that comes to mind is overly frequent reading of 
> the cluster state. We can enhance this naive solution so that re-reading is 
> done only when a bad node is found. But I am not sure whether such a read 
> optimization is necessary.
> I have done some unit tests around this class, mocking out even the time 
> factor. It runs in a second. I am interested in getting feedback about such 
> an approach. I will upload a patch with this shortly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 922 - Unstable

2017-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/922/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:153)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:110)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:108)
  at sun.reflect.GeneratedConstructorAccessor144.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:760)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:822)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1088)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:947)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:153)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:108)
at sun.reflect.GeneratedConstructorAccessor144.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:760)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:822)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1088)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:947)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:930)
at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:565)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([E3206407E943821A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:302)
at sun.reflect.GeneratedMethodAccessor59.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7878) QueryParser AND default operator and MultiWords synonyms failed if keywords exactly matches a synonym

2017-06-14 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049318#comment-16049318
 ] 

Jim Ferenczi commented on LUCENE-7878:
--

Thanks Emmanuel!
I confirm that this is a bug and that it only occurs when there is a single 
multi-word synonym in the query. In that case the SHOULD clause that is 
supposed to handle the synonym rule is removed by the query parser. I'll work 
on a fix.
 

> QueryParser AND default operator and MultiWords synonyms failed if keywords 
> exactly matches a synonym
> -
>
> Key: LUCENE-7878
> URL: https://issues.apache.org/jira/browse/LUCENE-7878
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: master (7.0), 6.6, 6.5.1
> Environment: 1.8.0_131-b11 on Mac OS X
>Reporter: Emmanuel Keller
>  Labels: multi-word, query-parser, synonyms
>
> This issue is about using the QueryParser with MultiWordsSynonyms.
> To reproduce the bug:
> - Use AND as the default operator
> - Use a query string which exactly matches one synonym.
> In short, the part of the query which handles the synonym lookup should keep an 
> "OR" relation between the synonyms, but it is translated as an "AND".
> If I parse "guinea pig", which is a synonym of "cavy":
> Using the default OR, I get something correct:
> "(+guinea +pig) cavy"
> Note: it should probably be ((+guinea +pig) cavy)
> Using AND as the default operator, I get something wrong:
> +(+guinea +pig) +cavy
> I expected:
> +((+guinea +pig) cavy)
> The relation between "guinea pig" and "cavy" is now an AND. It should still be 
> an OR because it is a synonym clause.
> To aid understanding: if I now parse "guinea pig world",
> I get the expected result:
> +((+guinea +pig) cavy) +world
> The relation between "guinea pig" and "cavy" is an OR as expected (it is a 
> synonym), and the relation with "world" is an AND, as expected from the default 
> operator.
> Here is the additional unit test; I hope it is pretty self-explanatory:
> org.apache.lucene.queryparser.classic.TestQueryParser
> {code:java}
> public void testDefaultOperatorWhenKeywordsMatchesExactlyOneSynonym() throws 
> ParseException {
> // Using the default OR operator
> QueryParser smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.OR);
> assertEquals("(+guinea +pig) cavy", smart.parse("guinea 
> pig").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy) +world", smart.parse("guinea pig 
> world").toString("field"));
> // Using the default AND operator
> smart = new QueryParser("field", new Analyzer1());
> smart.setSplitOnWhitespace(false);
> smart.setDefaultOperator(Operator.AND);
> assertEquals("+((+guinea +pig) cavy)", smart.parse("guinea 
> pig").toString("field"));
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10889) Stale zookeeper information is used during failover check

2017-06-14 Thread Mihaly Toth (JIRA)
Mihaly Toth created SOLR-10889:
--

 Summary: Stale zookeeper information is used during failover check
 Key: SOLR-10889
 URL: https://issues.apache.org/jira/browse/SOLR-10889
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (7.0)
Reporter: Mihaly Toth


In {{OverseerAutoReplicaFailoverThread}}, the code iterates over every replica to 
check whether it needs to be reloaded on a new node. In each such round it reads 
the cluster state only once, at the beginning. Especially in the case of big 
clusters, the cluster state may change while iterating through the replicas. As 
a result, false decisions may be made: restarting a healthy core, or not 
handling a bad node.

The code fragment in question:
{code}
for (Slice slice : slices) {
  if (slice.getState() == Slice.State.ACTIVE) {
final Collection downReplicas = new 
ArrayList();
int goodReplicas = findDownReplicasInSlice(clusterState, 
docCollection, slice, downReplicas);
{code}

The solution seems rather straightforward, reading the state every time:
{code}
int goodReplicas = 
findDownReplicasInSlice(zkStateReader.getClusterState(), docCollection, slice, 
downReplicas);
{code}

The only counterargument that comes to mind is overly frequent reading of the 
cluster state. We can enhance this naive solution so that re-reading is done 
only when a bad node is found. But I am not sure whether such a read 
optimization is necessary.

I have done some unit tests around this class, mocking out even the time 
factor. It runs in a second. I am interested in getting feedback about such an 
approach. I will upload a patch with this shortly.
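The failure mode can be demonstrated with a small self-contained Python sketch (illustrative names, not Solr code): with a one-time snapshot, a replica that recovers mid-iteration is still restarted, while re-reading the live state avoids the false decision.

```python
# Live, mutable cluster state: shard name -> replica state.
state = {"shard1": "down", "shard2": "down"}

def failover_with_snapshot(slices, live_state):
    snapshot = dict(live_state)        # read once, like clusterState
    restarted = []
    for s in slices:
        if s == "shard2":
            live_state["shard2"] = "active"   # recovers mid-iteration
        if snapshot[s] == "down":             # stale view: still "down"
            restarted.append(s)
    return restarted

def failover_with_fresh_reads(slices, live_state):
    restarted = []
    for s in slices:
        if s == "shard2":
            live_state["shard2"] = "active"   # recovers mid-iteration
        if live_state[s] == "down":           # re-read every iteration
            restarted.append(s)
    return restarted

restarted_stale = failover_with_snapshot(["shard1", "shard2"], dict(state))
restarted_fresh = failover_with_fresh_reads(["shard1", "shard2"], dict(state))
```

The snapshot version needlessly restarts shard2 even though it recovered during the loop; the fresh-read version only restarts the genuinely down shard1.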



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049300#comment-16049300
 ] 

Erick Erickson commented on SOLR-10574:
---

I'll add a yes to managed-schema having an .xml extension. Agreed; make it a 
separate issue.

Catch-all _text_ field: yes. Enabled by default: yes with warning.

Since this is not for production anyway, might as well make it as easy as 
possible to get started. If we're going to enable data_driven, we should have a 
catch-all field enabled by default. Neither one is something I'd recommend 
going to production with without close examination.

So to me it's a "both or neither" preference. The point of having data_driven 
as the default is to lower first-time barriers to entry. If the catch-all field 
is there and it's the pre-configured "df" for the request handlers people get 
results the first time they index and search without even knowing they have 
fields in their documents. Otherwise they're left scratching their heads 
because they indexed stuff but didn't find anything.

So we'd then tell them "Examine your index to see what fields were actually 
defined, and do fielded searches ('cause they don't even necessarily know what 
the docs look like!), or enable a catch-all field and re-index." That's a 
minimal improvement in first-time experience over what we have now; at least 
they were able to index docs, even if they couldn't successfully search them 
the first time they tried.

Perhaps the warning (in the schema file and in startup guides or maybe "taking 
Solr to production") is something akin to "add-unknown-fields-to-the-schema and 
the default behavior of copying all fields to _text_ are options intended for 
getting started. Production systems rarely enable either of these two options. 
See solrconfig.xml and managed-schema(.xml) for the text 'RARELY ENABLED FOR 
PRODUCTION' ". Or something like that.

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, data_driven_schema_configs is the default configset used when 
> collections are created via the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide on the best 
> out-of-the-box choice, considering that many users might create collections 
> without knowing about the concept of a configset.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If you don't want data-driven / schemaless behaviour, disable it: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+173) - Build # 3746 - Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3746/
Java: 64bit/jdk-9-ea+173 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([E612FBBE820EE7F9:6D35286FC3084C7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+173) - Build # 19865 - Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19865/
Java: 64bit/jdk-9-ea+173 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard

Error Message:
Error from server at http://127.0.0.1:41467/solr: deleteshard the collection 
time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41467/solr: deleteshard the collection time 
out:180s
at 
__randomizedtesting.SeedInfo.seed([33A77E16E45B5B50:F6F9CA0A62AE91AD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:592)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:219)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:459)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:389)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1130)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard(CollectionsAPISolrJTest.java:143)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4073 - Still Unstable!

2017-06-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4073/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([B7FF32F40D4EFD3F:D592CCB5C2C09D01]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 

[jira] [Updated] (SOLR-10888) almost self-generating python script(s) to access V2 APIs

2017-06-14 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10888:
---
Attachment: SOLR-10888.patch

Attaching illustrative patch, sample output below.

top-level usage
{code}
prompt:solr$ bin/v2-collections-api.py
usage: v2-collections-api.py [-h]
                             {restore-collection,create,create-alias,backup-collection,delete-alias}
                             ...
v2-collections-api.py: error: too few arguments
{code}

per-command usage
{code}
prompt:solr$ bin/v2-collections-api.py create
usage: v2-collections-api.py create [-h]
                                    [--replicationFactor REPLICATIONFACTOR]
                                    [--async ASYNC]
                                    [--tlogReplicas TLOGREPLICAS]
                                    [--maxShardsPerNode MAXSHARDSPERNODE]
                                    [--shards SHARDS]
                                    [--shuffleNodes SHUFFLENODES]
                                    [--nrtReplicas NRTREPLICAS]
                                    [--numShards NUMSHARDS]
                                    [--autoAddReplicas AUTOADDREPLICAS]
                                    [--config CONFIG]
                                    [--pullReplicas PULLREPLICAS]
                                    name
v2-collections-api.py create: error: too few arguments
{code}

detailed per-command usage
{code}
prompt:solr$ bin/v2-collections-api.py create -h
usage: v2-collections-api.py create [-h]
                                    [--replicationFactor REPLICATIONFACTOR]
                                    [--async ASYNC]
                                    [--tlogReplicas TLOGREPLICAS]
                                    [--maxShardsPerNode MAXSHARDSPERNODE]
                                    [--shards SHARDS]
                                    [--shuffleNodes SHUFFLENODES]
                                    [--nrtReplicas NRTREPLICAS]
                                    [--numShards NUMSHARDS]
                                    [--autoAddReplicas AUTOADDREPLICAS]
                                    [--config CONFIG]
                                    [--pullReplicas PULLREPLICAS]
                                    name

positional arguments:
  name                  The name of the collection to be created.

optional arguments:
  -h, --help            show this help message and exit
  --replicationFactor REPLICATIONFACTOR
                        The number of NRT replicas to be created for each
                        shard. Replicas are physical copies of each shard,
                        acting as failover for the shard.
  --async ASYNC         Defines a request ID that can be used to track this
                        action after it's submitted. The action will be
                        processed asynchronously.
  --tlogReplicas TLOGREPLICAS
  ...
{code}

illustrative run:
{code}
prompt:solr$ bin/v2-collections-api.py create myDemoCollection --shards=42
url = 
http://localhost:8983/v2/collections/create?name=myDemoCollection=42
{code}
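The introspection-to-argparse flow described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the attached patch: the introspection payload shape and the helper names (`build_parser`, `to_url`) are simplified assumptions.

```python
import argparse

# Simplified, made-up introspection payload; the real V2 introspect
# response (e.g. /v2/collections/_introspect) is much richer.
INTROSPECTION = {
    "create": {
        "positional": ["name"],
        "properties": {"numShards": "number of shards",
                       "config": "configset name"},
    },
    "delete-alias": {"positional": ["name"], "properties": {}},
}

def build_parser(spec):
    # Build a nested ArgumentParser on the fly from the introspection dict.
    parser = argparse.ArgumentParser(prog="v2-collections-api.py")
    sub = parser.add_subparsers(dest="command", required=True)
    for cmd, info in spec.items():
        p = sub.add_parser(cmd)
        for pos in info["positional"]:
            p.add_argument(pos)
        for opt, help_text in info["properties"].items():
            p.add_argument("--" + opt, help=help_text)
    return parser

def to_url(args, base="http://localhost:8983/v2/collections"):
    # Turn the parsed arguments back into a V2 API URL; parameters are
    # sorted only to make the output deterministic for this sketch.
    params = {k: v for k, v in vars(args).items()
              if k != "command" and v is not None}
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{base}/{args.command}?{query}"

args = build_parser(INTROSPECTION).parse_args(
    ["create", "myDemoCollection", "--numShards", "42"])
print(to_url(args))
```

Unknown commands and missing positionals fall out of argparse for free, which is what produces the "too few arguments" usage errors shown above.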

> almost self-generating python script(s) to access V2 APIs
> -
>
> Key: SOLR-10888
> URL: https://issues.apache.org/jira/browse/SOLR-10888
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10888.patch
>
>
> The V2 API supports introspection and the results of such introspection(s) 
> can be used to automatically on-the-fly generate a (nested) 
> {{argparse.ArgumentParser}} in a python script and then to again 
> automatically transform the script arguments into a url and http call to the 
> V2 API.
> Illustrative patch and sample output to follow.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10888) almost self-generating python script(s) to access V2 APIs

2017-06-14 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-10888:
--

 Summary: almost self-generating python script(s) to access V2 APIs
 Key: SOLR-10888
 URL: https://issues.apache.org/jira/browse/SOLR-10888
 Project: Solr
  Issue Type: New Feature
Reporter: Christine Poerschke
Priority: Minor


The V2 API supports introspection and the results of such introspection(s) can 
be used to automatically on-the-fly generate a (nested) 
{{argparse.ArgumentParser}} in a python script and then to again automatically 
transform the script arguments into a url and http call to the V2 API.

Illustrative patch and sample output to follow.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10887) Add .xml extension to managed-schema

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-10887:
---

 Summary: Add .xml extension to managed-schema
 Key: SOLR-10887
 URL: https://issues.apache.org/jira/browse/SOLR-10887
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya
Priority: Blocker
 Fix For: master (7.0)


Discussions here SOLR-10574.
There is consensus to renaming managed-schema back to managed-schema.xml. 
Requires backcompat handling as mentioned in Yonik's comment:

{code}
there is back compat to consider. I'd also prefer that if it get changed, we 
first look for "managed-schema.xml", then "managed-schema", and then 
"schema.xml" to preserve back compat.
{code}
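The lookup order Yonik describes can be sketched as follows. Only the three file names come from the comment above; the helper itself is hypothetical (the real change would live in Solr's Java schema-loading code, not a standalone function).

```python
# Proposed back-compat resolution: prefer the new name, then the old
# managed name, then the classic schema.xml.
LOOKUP_ORDER = ("managed-schema.xml", "managed-schema", "schema.xml")

def resolve_schema_file(existing_files):
    # existing_files: set of file names present in the conf directory.
    for candidate in LOOKUP_ORDER:
        if candidate in existing_files:
            return candidate
    return None

print(resolve_schema_file({"managed-schema", "solrconfig.xml"}))  # → managed-schema
```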



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049195#comment-16049195
 ] 

Ishan Chattopadhyaya edited comment on SOLR-10574 at 6/14/17 1:52 PM:
--

I'll add a patch with data-driven enabled by default, catch all field present 
(but not used). I'd prefer if we did the managed-schema to managed-schema.xml 
change as -a separate issue- SOLR-10887 (since it requires backcompat handling, 
and I don't want to complicate this issue).


was (Author: ichattopadhyaya):
I'll add a patch with data-driven enabled by default, catch all field present 
(but not used). I'd prefer if we did the managed-schema to managed-schema.xml 
change as a separate issue (since it requires backcompat handling, and I don't 
want to complicate this issue).

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents
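The curl toggle in the usage steps above can equally be scripted. A stdlib-only sketch: the endpoint path and the {{update.autoCreateFields}} property come from the description, while the host, collection, and helper name are illustrative (the request is built but not sent).

```python
import json
from urllib import request

def set_user_property(collection_url, props):
    # Build a Config API POST carrying a set-user-property command,
    # mirroring the curl example in the issue description.
    body = json.dumps({"set-user-property": props}).encode("utf-8")
    return request.Request(collection_url + "/config", data=body,
                           headers={"Content-Type": "application/json"})

req = set_user_property("http://host:8983/solr/coll1",
                        {"update.autoCreateFields": "false"})
print(req.full_url)  # caller would then do request.urlopen(req)
```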



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049195#comment-16049195
 ] 

Ishan Chattopadhyaya commented on SOLR-10574:
-

I'll add a patch with data-driven enabled by default, catch all field present 
(but not used). I'd prefer if we did the managed-schema to managed-schema.xml 
change as a separate issue (since it requires backcompat handling, and I don't 
want to complicate this issue).

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr macro expansion -- interfering with ManifoldCF posts?

2017-06-14 Thread Karl Wright
Hi Erik,

I only have snippets of logs from the user so my details are limited.  I
could request the parameter value for you if you like.  But since it's
coming out of some large Documentum repository, pretty much anything is
possible. ;-)  Documentum may have attributes that also use similar
escaping for macros AFAICT.

Karl


On Wed, Jun 14, 2017 at 9:38 AM, Erik Hatcher 
wrote:

> Karl -
>
> There’s expandMacros=false, as covered here: https://cwiki.apache.org/
> confluence/display/solr/Parameter+Substitution
>
> But… what exactly is being sent to Solr?Is there some kind of “${…”
> being sent as a parameter?   Just curious what’s getting you into this in
> the first place.   But disabling probably is your most desired solution.
>
> Erik
>
>
> > On Jun 14, 2017, at 9:34 AM, Karl Wright  wrote:
> >
> > Hi all,
> >
> > I've got a ManifoldCF user who is posting content to Solr using the MCF
> Solr output connector.  This connector uses SolrJ under the covers -- a
> fairly recent version -- but also has overridden some classes to insure
> that multipart form posts will be used for most content.
> >
> > The problem is that, for a specific document, the user is getting an
> ArrayIndexOutOfBounds exception in Solr, as follows:
> >
> > >>
> > 2017-06-14T08:25:16,546 - ERROR [qtp862890654-69725:SolrException@148]
> - {collection=c:documentum_manifoldcf_stg, 
> core=x:documentum_manifoldcf_stg_shard1_replica1,
> node_name=n:**:8983_solr, replica=r:core_node1, shard=s:shard1} -
> java.lang.StringIndexOutOfBoundsException: String index out of range: -296
> > at java.lang.String.substring(String.java:1911)
> > at org.apache.solr.request.macro.MacroExpander._expand(
> MacroExpander.java:143)
> > at org.apache.solr.request.macro.MacroExpander.expand(
> MacroExpander.java:93)
> > at org.apache.solr.request.macro.MacroExpander.expand(
> MacroExpander.java:59)
> > at org.apache.solr.request.macro.MacroExpander.expand(
> MacroExpander.java:45)
> > at org.apache.solr.request.json.RequestUtil.processParams(
> RequestUtil.java:157)
> > at org.apache.solr.util.SolrPluginUtils.setDefaults(
> SolrPluginUtils.java:172)
> > at org.apache.solr.handler.RequestHandlerBase.handleRequest(
> RequestHandlerBase.java:152)
> > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2102)
> > at org.apache.solr.servlet.HttpSolrCall.execute(
> HttpSolrCall.java:654)
> > at org.apache.solr.servlet.HttpSolrCall.call(
> HttpSolrCall.java:460)
> > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:257)
> > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:208)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> doFilter(ServletHandler.java:1652)
> > at org.eclipse.jetty.servlet.ServletHandler.doHandle(
> ServletHandler.java:585)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:143)
> > at org.eclipse.jetty.security.SecurityHandler.handle(
> SecurityHandler.java:577)
> > at org.eclipse.jetty.server.session.SessionHandler.
> doHandle(SessionHandler.java:223)
> > at org.eclipse.jetty.server.handler.ContextHandler.
> doHandle(ContextHandler.java:1127)
> > at org.eclipse.jetty.servlet.ServletHandler.doScope(
> ServletHandler.java:515)
> > at org.eclipse.jetty.server.session.SessionHandler.
> doScope(SessionHandler.java:185)
> > at org.eclipse.jetty.server.handler.ContextHandler.
> doScope(ContextHandler.java:1061)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:141)
> > at org.eclipse.jetty.server.handler.ContextHandlerCollection.
> handle(ContextHandlerCollection.java:215)
> > at org.eclipse.jetty.server.handler.HandlerCollection.
> handle(HandlerCollection.java:110)
> > at org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:97)
> > at org.eclipse.jetty.server.Server.handle(Server.java:499)
> > at org.eclipse.jetty.server.HttpChannel.handle(
> HttpChannel.java:310)
> > at org.eclipse.jetty.server.HttpConnection.onFillable(
> HttpConnection.java:257)
> > at org.eclipse.jetty.io.AbstractConnection$2.run(
> AbstractConnection.java:540)
> > at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> QueuedThreadPool.java:635)
> > at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(
> QueuedThreadPool.java:555)
> > at java.lang.Thread.run(Thread.java:745)
> > <<
> >
> > It looks worrisome to me that there's now possibly some kind of "macro
> expansion" that is being triggered within parameters being sent to Solr.
> Can anyone tell me either how to (a) disable this feature, or (b) how the
> MCF Solr output connector should escape parameters being posted so 

[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-14 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049193#comment-16049193
 ] 

Ishan Chattopadhyaya commented on SOLR-10574:
-

So far, here's what I've summarized from comments above. Please correct me if I 
understood your position incorrectly.

Data driven enabled/disabled by default

{code}
Ishan Chattopadhyaya    enabled
David Smiley            both: disabled is fine, enabled is fine with adequate warning
Shawn Heisey            disabled
Jan Hoydahl             enabled
Erick Erickson          enabled with warning
Noble Paul              disabled
Yonik Seeley            disabled (but no strong preference?)

Disabled - 3.5
Enabled - 3.5

Decision: Split (until someone pitches in or changes vote)
{code}

managed-schema should have .xml extension?

{code}
Ishan Chattopadhyaya    no
Varun Thacker           yes
Alexandre Rafalovich    yes (judging by comments)
Jan Hoydahl             yes
Yonik Seeley            yes

Decision: .xml should be back, with backcompat handling
{code}

Catch all _text_ to be used as copy field target by default?
{code}
Yonik Seeley    to be present in schema: yes    used by default: no
Jan Hoydahl     to be present in schema: yes    used by default: no

Decision: (discussions are inconclusive yet)
{code}

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr macro expansion -- interfering with ManifoldCF posts?

2017-06-14 Thread Erik Hatcher
Karl -

There’s expandMacros=false, as covered here: 
https://cwiki.apache.org/confluence/display/solr/Parameter+Substitution

But… what exactly is being sent to Solr?Is there some kind of “${…” being 
sent as a parameter?   Just curious what’s getting you into this in the first 
place.   But disabling probably is your most desired solution.
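If disabling is the route, a client could append the parameter when building its update request. A stdlib-only sketch: only the {{expandMacros=false}} parameter comes from the page linked above; the endpoint, other params, and helper name are illustrative.

```python
from urllib.parse import urlencode

def build_update_url(base, params):
    # Always disable macro expansion so a literal "${...}" in posted
    # content is never interpreted by Solr's MacroExpander.
    merged = dict(params, expandMacros="false")
    return base + "?" + urlencode(merged)

print(build_update_url("http://localhost:8983/solr/coll1/update",
                       {"commit": "true"}))
```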

Erik


> On Jun 14, 2017, at 9:34 AM, Karl Wright  wrote:
> 
> Hi all,
> 
> I've got a ManifoldCF user who is posting content to Solr using the MCF Solr 
> output connector.  This connector uses SolrJ under the covers -- a fairly 
> recent version -- but also has overridden some classes to insure that 
> multipart form posts will be used for most content.
> 
> The problem is that, for a specific document, the user is getting an 
> ArrayIndexOutOfBounds exception in Solr, as follows:
> 
> >>
> 2017-06-14T08:25:16,546 - ERROR [qtp862890654-69725:SolrException@148] - 
> {collection=c:documentum_manifoldcf_stg, 
> core=x:documentum_manifoldcf_stg_shard1_replica1, 
> node_name=n:**:8983_solr, replica=r:core_node1, shard=s:shard1} - 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -296
> at java.lang.String.substring(String.java:1911)
> at 
> org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:143)
> at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:93)
> at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:59)
> at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:45)
> at 
> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:157)
> at 
> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:172)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:152)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2102)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> <<
> 
> It looks worrisome to me that there's now possibly some kind of "macro 
> expansion" that is being triggered within parameters being sent to Solr.  Can 
> anyone tell me either how to (a) disable this feature, or (b) how the MCF 
> Solr output connector should escape parameters being posted so that Solr does 
> not attempt any macro expansion?  If the latter, I also need to know when 
> this feature appeared, since obviously whether or not to do the escaping will 
> depend on the precise version of the Solr instance involved.
> 
> I'm also quite concerned that considerations of backwards compatibility may 
> have been lost at some point with Solr, since heretofore I could count on 
> older versions of SolrJ working with newer versions of Solr.  Please clarify 
> what the current policy is
> 

Solr macro expansion -- interfering with ManifoldCF posts?

2017-06-14 Thread Karl Wright
Hi all,

I've got a ManifoldCF user who is posting content to Solr using the MCF
Solr output connector.  This connector uses SolrJ under the covers -- a
fairly recent version -- but also has overridden some classes to ensure
that multipart form posts will be used for most content.

The problem is that, for a specific document, the user is getting an
ArrayIndexOutOfBounds exception in Solr, as follows:

>>
2017-06-14T08:25:16,546 - ERROR [qtp862890654-69725:SolrException@148] -
{collection=c:documentum_manifoldcf_stg,
core=x:documentum_manifoldcf_stg_shard1_replica1,
node_name=n:**:8983_solr, replica=r:core_node1, shard=s:shard1} -
java.lang.StringIndexOutOfBoundsException: String index out of range: -296
at java.lang.String.substring(String.java:1911)
at org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:143)
at org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:93)
at org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:59)
at org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:45)
at org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:157)
at org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:172)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:152)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2102)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
<<

It looks worrisome to me that there's now possibly some kind of "macro
expansion" that is being triggered within parameters being sent to Solr.
Can anyone tell me either how to (a) disable this feature, or (b) how the
MCF Solr output connector should escape parameters being posted so that
Solr does not attempt any macro expansion?  If the latter, I also need to
know when this feature appeared, since obviously whether or not to do the
escaping will depend on the precise version of the Solr instance involved.

I'm also quite concerned that considerations of backwards compatibility may
have been lost at some point with Solr, since heretofore I could count on
older versions of SolrJ working with newer versions of Solr.  Please
clarify what the current policy is.


Thanks,
Karl


[jira] [Commented] (SOLR-10874) FloatPayloadValueSource throws assertion error if debug=true

2017-06-14 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049179#comment-16049179
 ] 

Erik Hatcher commented on SOLR-10874:
-

bq. Maybe this scenario only happens in very specific circumstances?

What are your circumstances outside of this test case?   I'd just like to 
experience it in the wild - I'll get the fix committed, ideally in time for a 
6.6.1 or 6.7.   I tried with a single doc indexed to see if it had to do with 
only one document, but it still didn't fail as expected, though the test 
case definitely fails - my bad.

> FloatPayloadValueSource throws assertion error if debug=true
> 
>
> Key: SOLR-10874
> URL: https://issues.apache.org/jira/browse/SOLR-10874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Michael Kosten
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10874.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Using the new payload function will fail with an assertion error if the debug 
> parameter is included in the query. This is caused by the floatValue method 
> in FloatPayloadValueSource being called for the same doc id twice in a row.
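A common fix for the pattern described above (a per-document method that must tolerate being called twice for the same doc id, e.g. once for scoring and once for {{debug=true}} explain output) is to cache the last computed value. This is a generic sketch of that technique, not the actual SOLR-10874 patch, and the class and data shape are hypothetical.

```python
class PayloadValues:
    # Sketch: advancing the underlying payload iterator is one-way, so
    # calling float_value twice for the same doc must not advance it again.
    def __init__(self, per_doc_values):
        self._values = per_doc_values  # hypothetical precomputed doc→payload map
        self._last_doc = -1
        self._last_value = 0.0

    def float_value(self, doc):
        if doc == self._last_doc:
            # Repeated call for the same doc (the debug=true case):
            # return the cached value instead of re-consuming state.
            return self._last_value
        self._last_doc = doc
        self._last_value = self._values.get(doc, 0.0)
        return self._last_value

pv = PayloadValues({1: 0.75})
print(pv.float_value(1), pv.float_value(1))  # → 0.75 0.75
```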



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10874) FloatPayloadValueSource throws assertion error if debug=true

2017-06-14 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-10874:
---

Assignee: Erik Hatcher

> FloatPayloadValueSource throws assertion error if debug=true
> 
>
> Key: SOLR-10874
> URL: https://issues.apache.org/jira/browse/SOLR-10874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Michael Kosten
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10874.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Using the new payload function will fail with an assertion error if the debug 
> parameter is included in the query. This is caused by the floatValue method 
> in FloatPayloadValueSource being called for the same doc id twice in a row.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10874) FloatPayloadValueSource throws assertion error if debug=true

2017-06-14 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-10874:

Fix Version/s: 6.6.1
   6.7
   master (7.0)

> FloatPayloadValueSource throws assertion error if debug=true
> 
>
> Key: SOLR-10874
> URL: https://issues.apache.org/jira/browse/SOLR-10874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Michael Kosten
>Priority: Minor
> Fix For: master (7.0), 6.7, 6.6.1
>
> Attachments: SOLR-10874.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Using the new payload function will fail with an assertion error if the debug 
> parameter is included in the query. This is caused by the floatValue method 
> in FloatPayloadValueSource being called for the same doc id twice in a row.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9565) Make every UpdateRequestProcessor available implicitly

2017-06-14 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049172#comment-16049172
 ] 

Noble Paul commented on SOLR-9565:
--

The URP chain is a huge ball and chain when all that you need to do is modify 
an input document. Just think of it as a feature of /update and things become 
much simpler.

If you wish to configure more complex URPs, please use the request params 
feature to create the param set and reuse it across multiple requests.
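Noble's suggestion can be sketched as a param set created through the Request 
Parameters API and then referenced per request. The payload shape below follows 
that API, but the param-set name and the {{HTMLStripField.fieldName}} 
sub-parameter are made up for illustration (the issue itself leaves the exact 
sub-parameter syntax open):

```json
{
  "set": {
    "stripHtml": {
      "processor": "HTMLStripField",
      "HTMLStripField.fieldName": "description"
    }
  }
}
```

POSTed to the collection's {{/config/params}} endpoint, this would let an 
update request opt in with {{useParams=stripHtml}} instead of repeating the 
processor parameters on every request.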


> Make every UpdateRequestProcessor available implicitly
> --
>
> Key: SOLR-9565
> URL: https://issues.apache.org/jira/browse/SOLR-9565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Now that we can 'construct' URP chains through request parameters, we 
> should make all the URPs available automatically. The next challenge is to 
> make them read their configuration from request parameters as well.
> To access {{HTMLStripFieldUpdateProcessorFactory}}, the parameter could be 
> {{processor=HTMLStripField}} (the "UpdateProcessorFactory" part is 
> appended automatically).
> The next step is to make the URPs accept request parameters instead of just 
> configuration parameters, e.g.: 
> {{processor=HTMLStripField=}}






[jira] [Commented] (SOLR-9565) Make every UpdateRequestProcessor available implicitly

2017-06-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049126#comment-16049126
 ] 

Jan Høydahl commented on SOLR-9565:
---

You can already call the Config API with {{add-updateprocessor}}, see 
http://lucene.apache.org/solr/guide/6_6/config-api.html#ConfigAPI-Whatabout_updateRequestProcessorChain_
 and then use the name you selected in {{=myName}}, so perhaps this 
is good enough even if it requires two steps?

About making all URPs available implicitly, I think SPI can do much of the 
heavy lifting here, and then {{add-updateprocessor}} could accept 
{{"id":"urp-spi-name"}} as an alternative to {{"class":"solr.MyURPFactory"}}
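Jan's two-step flow might look roughly like this; the payload shape follows the 
Config API's {{add-updateprocessor}} command, but the processor name and the 
{{fieldName}} initialization argument are assumptions for illustration:

```json
{
  "add-updateprocessor": {
    "name": "stripHtml",
    "class": "solr.HTMLStripFieldUpdateProcessorFactory",
    "fieldName": "description"
  }
}
```

After POSTing that to the collection's {{/config}} endpoint, the second step 
would be referencing the chosen name on the update request (presumably via the 
{{processor}} parameter).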

> Make every UpdateRequestProcessor available implicitly
> --
>
> Key: SOLR-9565
> URL: https://issues.apache.org/jira/browse/SOLR-9565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Now that we can 'construct' the URP chains through request parameters, we 
> should make all the URPs available automatically. The next challenge is to 
> make them read the configuration from request parameters as well
> to access {{HTMLStripFieldUpdateProcessorFactory}} the parameter could be 
> {{processor=HTMLStripField}} (The UpdateProcessorFactory part is 
> automatically appended )
> The next step is to make the URPs accept request parameters instead of just 
> configuration parameters e.g: 
> {{processor=HTMLStripField=}}






[jira] [Commented] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2017-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049124#comment-16049124
 ] 

David Smiley commented on SOLR-9824:


+1 I will.

> Documents indexed in bulk are replicated using too many HTTP requests
> -
>
> Key: SOLR-9824
> URL: https://issues.apache.org/jira/browse/SOLR-9824
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: David Smiley
>Assignee: Mark Miller
> Attachments: SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, 
> SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, 
> SOLR-9824-tflobbe.patch
>
>
> This takes a while to explain; bear with me. While working on bulk indexing 
> small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
> shards would see an /update log message every ~6ms, which is *way* too often.  
> These are requests from one shard (that isn't a leader/replica for these docs 
> but the recipient from my client) to the target shard leader (no additional 
> replicas).  One might ask why I'm not sending docs to the right shard in the 
> first place; I have a reason, but it's beside the point -- there's a real 
> Solr perf problem here, and it probably applies equally to 
> replicationFactor>1 situations too.  I could turn off the logs, but that would 
> hide useful stuff, and it's disconcerting to me that so many short-lived HTTP 
> requests are happening, somehow at the behest of DistributedUpdateProcessor. 
>  After lots of analysis and debugging and hair pulling, I finally figured it 
> out.  
> In SOLR-7333, [~tpot] introduced an optimization called 
> {{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
> poll the internal queue with a '0' timeout, so that it can close the 
> connection without it hanging around any longer than needed.  This part makes 
> sense to me.  Currently the only spot with the smarts to set this flag is 
> {{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the 
> last document.  So if a shard received docs in a javabin stream (but not 
> other formats), one would expect the _last_ document to have this flag.  
> There's even a test.  Docs without this flag get the default poll time; for 
> javabin it's 25ms.  Okay.
> I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
> javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  
> I didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
> DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
> (defaulting to javabin) to send each document separately without any leading 
> marker or trailing marker.  For the XML format, by comparison, there is a 
> leading and trailing marker ({{<add>}} ... {{</add>}}).  Since there's no outer 
> container for the javabin unmarshalling to detect the last document, it marks 
> _every_ document as {{req.lastDocInBatch()}}!  Ouch!
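The interplay described above can be sketched with a toy model of the runner's 
queue poll; this is not Solr code, just an illustration of why a wrongly set 
lastDocInBatch flag turns every document into a fresh HTTP request (the class 
and method names here are invented):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BatchPollSketch {
    // After sending a document, the runner polls the queue for the next one.
    // A doc flagged as the last in its batch polls with a 0ms timeout so the
    // connection can be closed right away; otherwise the default poll time
    // (25ms for javabin) keeps the connection open for follow-up docs.
    static long pollTimeMs(boolean lastDocInBatch, long defaultPollMs) {
        return lastDocInBatch ? 0 : defaultPollMs;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("doc2");

        // Normal doc: wait up to 25ms; doc2 is already queued, so the same
        // connection is reused for it.
        String next = queue.poll(pollTimeMs(false, 25), TimeUnit.MILLISECONDS);
        System.out.println("reused connection for: " + next);

        // Doc wrongly flagged lastDocInBatch: a 0ms poll on the now-empty
        // queue returns null immediately, the connection is closed, and the
        // next doc pays for a brand-new HTTP request.
        String none = queue.poll(pollTimeMs(true, 25), TimeUnit.MILLISECONDS);
        System.out.println("connection closed, poll returned: " + none);
    }
}
```

If every document carries the flag, as the javabin unmarshalling bug causes, 
the 25ms grace window never applies and each document lands on its own 
short-lived connection.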





