[jira] [Commented] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069542#comment-16069542
 ] 

Andrew Lundgren commented on SOLR-10981:


Not in this case. They are stored off-box in S3 and loaded via a URL.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 4.10.4, 6.6, master (7.0)
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URLs or local files that are 
> gzipped.
> For URLs, to determine whether the file is gzipped, it checks whether the 
> Content-Encoding is "gzip" or the file name ends in ".gz".
> For local files, if the file name ends in ".gz", it assumes the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.
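The detection-and-wrapping rule quoted above can be sketched as follows. This is a minimal standalone illustration, not code from SOLR-10981.patch; the class and method names are invented for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipLoadSketch {
    // Mirrors the rule described above: treat the source as gzipped when the
    // Content-Encoding header is "gzip" or the name ends in ".gz".
    public static boolean looksGzipped(String name, String contentEncoding) {
        return "gzip".equalsIgnoreCase(contentEncoding)
                || (name != null && name.endsWith(".gz"));
    }

    // Wrap the raw stream when detection says so; leave it untouched otherwise.
    public static InputStream maybeDecompress(InputStream in, boolean gzipped)
            throws Exception {
        return gzipped ? new GZIPInputStream(in) : in;
    }

    public static void main(String[] args) throws Exception {
        // Round-trip a small CSV payload through gzip to show the wrapping works.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("id,name\n1,foo\n".getBytes(StandardCharsets.UTF_8));
        }
        InputStream in = maybeDecompress(
                new ByteArrayInputStream(buf.toByteArray()),
                looksGzipped("data.csv.gz", null));
        String csv = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(csv.startsWith("id,name")); // true
    }
}
```

The same `maybeDecompress` call works for both URL and local-file streams, which is the appeal of detecting compression up front.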



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10814) Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos authentication

2017-06-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069520#comment-16069520
 ] 

Noble Paul commented on SOLR-10814:
---

[~hgadre] Yes, in case of Kerberos {{foo@example.com}} has to be specified in 
the user-role mapping. So what is wrong with that? How do you want it to be 
changed? Do you wish to use just {{foo}} instead of {{u...@example.com}}?

> Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos 
> authentication
> ---
>
> Key: SOLR-10814
> URL: https://issues.apache.org/jira/browse/SOLR-10814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>
> Solr allows configuring roles to control user access to the system. This is 
> accomplished through rule-based permission definitions which are assigned to 
> users.
> The authorization framework in Solr passes the information about the request 
> (to be authorized) using an instance of AuthorizationContext class. Currently 
> the only way to extract authenticated user is via getUserPrincipal() method 
> which returns an instance of java.security.Principal class. The 
> RuleBasedAuthorizationPlugin implementation invokes getName() method on the 
> Principal instance to fetch the list of associated roles.
> https://github.com/apache/lucene-solr/blob/2271e73e763b17f971731f6f69d6ffe46c40b944/solr/core/src/java/org/apache/solr/security/RuleBasedAuthorizationPlugin.java#L156
> In case of the basic authentication mechanism, the principal is the 
> userName, so it works fine. But in case of Kerberos authentication, the user 
> principal also contains the REALM information, e.g. instead of foo it would 
> return foo@example.com. This means that if the user changes the 
> authentication mechanism, he would also need to change the user-role mapping 
> in the authorization section to use foo@example.com instead of foo. This is 
> not good from a usability perspective.
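One possible direction the issue points at is normalizing the principal to its short name before the role lookup, so `foo@example.com` and `foo` map to the same roles. The sketch below is illustrative only; the class and method names are not from Solr.

```java
public class PrincipalShortName {
    // Strip the Kerberos realm (and any host component) from a principal
    // such as "foo/host1@EXAMPLE.COM" to get the bare user name "foo" that a
    // basic-auth setup would see.
    public static String shortName(String principal) {
        if (principal == null) return null;
        int at = principal.indexOf('@');
        String noRealm = (at >= 0) ? principal.substring(0, at) : principal;
        int slash = noRealm.indexOf('/');
        return (slash >= 0) ? noRealm.substring(0, slash) : noRealm;
    }

    public static void main(String[] args) {
        System.out.println(shortName("foo@EXAMPLE.COM"));     // foo
        System.out.println(shortName("foo/host1@EXAMPLE.COM")); // foo
        System.out.println(shortName("foo"));                 // foo
    }
}
```

Whether such normalization should be opt-in (it changes which strings match the user-role mapping) is exactly the usability question debated in the comments.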






[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-06-29 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069503#comment-16069503
 ] 

Scott Blum commented on SOLR-10983:
---

[~shalinmangar] [~jhump]

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped-- 
> however, if anything throws an exception (such as ZK bad version), we don't 
> clear queue-work.  Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.






[jira] [Updated] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-06-29 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-10983:
--
Attachment: SOLR-10983.patch

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped-- 
> however, if anything throws an exception (such as ZK bad version), we don't 
> clear queue-work.  Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.






[jira] [Created] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-06-29 Thread Scott Blum (JIRA)
Scott Blum created SOLR-10983:
-

 Summary: Fix DOWNNODE -> queue-work explosion
 Key: SOLR-10983
 URL: https://issues.apache.org/jira/browse/SOLR-10983
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Scott Blum
Assignee: Scott Blum


Every DOWNNODE command enqueues N copies of itself into queue-work, where N is 
number of collections affected by the DOWNNODE.

This rarely matters in practice, because queue-work gets immediately dumped-- 
however, if anything throws an exception (such as ZK bad version), we don't 
clear queue-work.  Then the next time through the loop we run the expensive 
DOWNNODE command potentially hundreds of times.
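The failure mode described above suggests clearing the per-iteration work queue even on the exception path. The sketch below is a simplified standalone model of that idea, not code from SOLR-10983.patch; all names are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QueueWorkSketch {
    // Whatever happens while a batch is processed, the per-iteration work
    // queue must be emptied before the next pass, or stale DOWNNODE copies
    // get re-run on every subsequent loop.
    public static void processBatch(Deque<String> queueWork, boolean fail) {
        try {
            while (!queueWork.isEmpty()) {
                String op = queueWork.poll();
                if (fail) {
                    // e.g. a ZK bad-version error while applying the op
                    throw new IllegalStateException("simulated failure on " + op);
                }
            }
        } catch (IllegalStateException e) {
            // swallowed for the demo; the real loop would log and retry
        } finally {
            queueWork.clear(); // never carry stale work into the next iteration
        }
    }

    public static void main(String[] args) {
        Deque<String> q = new ArrayDeque<>();
        q.add("downnode-coll1");
        q.add("downnode-coll2");
        processBatch(q, true); // fails mid-batch, but finally still clears
        System.out.println(q.isEmpty()); // true
    }
}
```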






[jira] [Commented] (SOLR-10397) Port 'autoAddReplicas' feature to the policy rules framework and make it work with non-shared filesystems

2017-06-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069485#comment-16069485
 ] 

Noble Paul commented on SOLR-10397:
---

Changes to {{Policy}} LGTM. Please add a relevant test case to {{TestPolicy}}. 
Integration tests are hard to debug.

> Port 'autoAddReplicas' feature to the policy rules framework and make it work 
> with non-shared filesystems
> -
>
> Key: SOLR-10397
> URL: https://issues.apache.org/jira/browse/SOLR-10397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10397.1.patch, SOLR-10397.patch
>
>
> Currently 'autoAddReplicas=true' can be specified in the Collection Create 
> API to automatically add replicas when a replica becomes unavailable. I 
> propose to move this feature to the autoscaling cluster policy rules design.
> This will include the following:
> * Trigger support for ‘nodeLost’ event type
> * Modification of existing implementation of ‘autoAddReplicas’ to 
> automatically create the appropriate ‘nodeLost’ trigger.
> * Any such auto-created trigger must be marked internally such that setting 
> ‘autoAddReplicas=false’ via the Modify Collection API should delete or 
> disable corresponding trigger.
> * Support for non-HDFS filesystems while retaining the optimization afforded 
> by HDFS i.e. the replaced replica can point to the existing data dir of the 
> old replica.
> * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across 
> the entire cluster using cluster properties in favor of using the 
> suspend-trigger/resume-trigger APIs.
> This will retain backward compatibility for the most part and keep a common 
> use-case easy to enable as well as make it available to more people (i.e. 
> people who don't use HDFS).






[jira] [Commented] (SOLR-6807) Make handleSelect=false by default and deprecate StandardRequestHandler

2017-06-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069475#comment-16069475
 ] 

Noble Paul commented on SOLR-6807:
--

[~dsmi...@mac.com] so, you moved the stateVer check up? That change looks fine.

> Make handleSelect=false by default and deprecate StandardRequestHandler
> ---
>
> Key: SOLR-6807
> URL: https://issues.apache.org/jira/browse/SOLR-6807
> Project: Solr
>  Issue Type: Task
>Affects Versions: 4.10.2
>Reporter: Alexandre Rafalovitch
>Assignee: David Smiley
>Priority: Minor
>  Labels: solrconfig.xml
> Fix For: master (7.0)
>
> Attachments: 
> SOLR_6807__fix__stateVer__check_to_not_depend_on_handleSelect_setting.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_handleSelect_false.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_test_files.patch
>
>
> In the solrconfig.xml, we have a long explanation on the legacy 
> ** section. Since we are cleaning up 
> legacy stuff for version 5, is it safe now to flip handleSelect's default to 
> be *false* and therefore remove both the attribute and the whole section 
> explaining it?
> Then, a section in Reference Guide or even a blog post can explain what to do 
> for the old clients that still need it. But it does not seem to be needed 
> anymore for the new users. And possibly cause confusing now that we have 
> implicit, explicit and overlay handlers.






[jira] [Commented] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069457#comment-16069457
 ] 

Ishan Chattopadhyaya commented on SOLR-10981:
-

Can the gzipped file be streamed in (using zcat)?

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 4.10.4, 6.6, master (7.0)
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URLs or local files that are 
> gzipped.
> For URLs, to determine whether the file is gzipped, it checks whether the 
> Content-Encoding is "gzip" or the file name ends in ".gz".
> For local files, if the file name ends in ".gz", it assumes the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[jira] [Commented] (SOLR-10864) Add static (test only) boolean to PointField indicating 'precisionStep' should be ignored so we can simplify points randomization in our schemas

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069450#comment-16069450
 ] 

Tomás Fernández Löbbe commented on SOLR-10864:
--

bq. I'm not sure what you mean by "consistency with other schemas"... and 
there's nothing to stop people from adding new tests that use this schema as 
well
I meant adding fields that are explicitly trie fields, something like "tint" or 
"int_t", and modifying the test to use those, instead of reverting what you did.

bq. force non-points and non-dv via sys prop overrides
+1 That makes sense and it's simpler than what I was proposing.

> Add static (test only) boolean to PointField indicating 'precisionStep' 
> should be ignored so we can simplify points randomization in our schemas
> 
>
> Key: SOLR-10864
> URL: https://issues.apache.org/jira/browse/SOLR-10864
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
> Attachments: SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch, 
> SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch
>
>
> (I'm spinning this idea out of parent jira SOLR-10807 so that it gets its 
> own jira# w/ its own summary for increased visibility/comments)
> In the interest of making it easier & more straightforward to get good 
> randomized test coverage of Points fields, I'd like to add the following to 
> the {{PointField}} class...
> {code}
>  /**
>   * 
>   * The Test framework can set this global variable to instruct PointField 
> that
> (on init) it should be tolerant of the precisionStep 
> argument used by TrieFields.
>   * This allows for simple randomization of TrieFields and PointFields w/o 
> extensive duplication
>   * of fieldType/ declarations.
>   * 
>   *
>   * NOTE: When {@link TrieField} is removed, this boolean must also be 
> removed
>   *
>   * @lucene.internal
>   * @lucene.experimental
>   */
>  public static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;
>  /** 
>   * NOTE: This method can be removed completely when
>   * {@link #TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS} is removed 
>   */
>  @Override
>  protected void init(IndexSchema schema, Map<String,String> args) {
>super.init(schema, args);
>if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
>  args.remove("precisionStep");
>}
>  }
> {code}
> Then in SolrTestCaseJ4, set 
> {{PointField.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS}} on a class by class 
> basis when randomizing Trie/Points (and unset \@AfterClass).
> (details to follow in comment)
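The test-hack mechanism quoted above can be mimicked in a self-contained way, as below. This is a standalone model of the idea, not PointField itself; the strict "reject unknown args" check is an assumption added so the effect of the flag is visible.

```java
import java.util.HashMap;
import java.util.Map;

public class TestHackSketch {
    // Global test-only flag: when set, init() silently drops the
    // Trie-specific "precisionStep" argument instead of failing on it.
    public static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;

    public static void init(Map<String, String> args) {
        if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
            args.remove("precisionStep");
        }
        // Assumed stand-in for the normal "unexpected argument" failure.
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("unexpected args: " + args);
        }
    }

    public static void main(String[] args) {
        Map<String, String> fieldArgs = new HashMap<>();
        fieldArgs.put("precisionStep", "8");

        TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = true;  // as the test framework would, per class
        init(fieldArgs);                                 // tolerated: the Trie-only arg is dropped
        TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false; // reset, as in @AfterClass
        System.out.println(fieldArgs.isEmpty()); // true
    }
}
```

This is what lets one `fieldType` declaration be randomized between Trie and Point implementations without duplicating schemas.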






[JENKINS-EA] Lucene-Solr-master-Windows (32bit/jdk-9-ea+173) - Build # 6691 - Still Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6691/
Java: 32bit/jdk-9-ea+173 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ActionThrottleTest.testBasics

Error Message:
989ms

Stack Trace:
java.lang.AssertionError: 989ms
at 
__randomizedtesting.SeedInfo.seed([724D123FB68E10C5:4F95BC138E604EB5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ActionThrottleTest.testBasics(ActionThrottleTest.java:63)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.V2ApiIntegrationTest.testCollectionsApi

Error Message:
Error from server at http://127.0.0.1:62608/solr: 
java.nio.file.InvalidPathException: Illegal char <�> at index 53: 
C:UsersjenkinsworkspaceLucene-Solr-master-Windowssolr�uildsolr-core estJ0 
empsolr.handler.V2ApiIntegrationTest_724D123FB68E10C5-001 empDir-002

Stack Trace:

[jira] [Commented] (SOLR-10864) Add static (test only) boolean to PointField indicating 'precisionStep' should be ignored so we can simplify points randomization in our schemas

2017-06-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069442#comment-16069442
 ] 

Hoss Man commented on SOLR-10864:
-

bq. ... I think these changes left TestRandomDVFaceting in a bad state. ...

Hmm, ok -- yeah, i see what you mean.

bq. For consistency with other schemas, maybe we want 
schema-docValuesFaceting.xml to have specific trie fields and use those in this 
test. 

I'm not sure what you mean by "consistency with other schemas", but in general 
my concern is that {{schema-docValuesFaceting.xml}} is already used by (one) 
other test -- and there's nothing to stop people from adding new tests that 
use this schema as well -- so I think it would make more sense to change 
TestRandomDVFaceting.beforeTests to _force_  non-points and non-dv via sys prop 
overrides, so that it can do the comparisons as originally intended (and 
compare the trie to points you added in 57934ba4)

make sense?

> Add static (test only) boolean to PointField indicating 'precisionStep' 
> should be ignored so we can simplify points randomization in our schemas
> 
>
> Key: SOLR-10864
> URL: https://issues.apache.org/jira/browse/SOLR-10864
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
> Attachments: SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch, 
> SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch
>
>
> (I'm spinning this idea out of parent jira SOLR-10807 so that it gets its 
> own jira# w/ its own summary for increased visibility/comments)
> In the interest of making it easier & more straightforward to get good 
> randomized test coverage of Points fields, I'd like to add the following to 
> the {{PointField}} class...
> {code}
>  /**
>   * 
>   * The Test framework can set this global variable to instruct PointField 
> that
> (on init) it should be tolerant of the precisionStep 
> argument used by TrieFields.
>   * This allows for simple randomization of TrieFields and PointFields w/o 
> extensive duplication
>   * of fieldType/ declarations.
>   * 
>   *
>   * NOTE: When {@link TrieField} is removed, this boolean must also be 
> removed
>   *
>   * @lucene.internal
>   * @lucene.experimental
>   */
>  public static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;
>  /** 
>   * NOTE: This method can be removed completely when
>   * {@link #TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS} is removed 
>   */
>  @Override
>  protected void init(IndexSchema schema, Map<String,String> args) {
>super.init(schema, args);
>if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
>  args.remove("precisionStep");
>}
>  }
> {code}
> Then in SolrTestCaseJ4, set 
> {{PointField.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS}} on a class by class 
> basis when randomizing Trie/Points (and unset \@AfterClass).
> (details to follow in comment)






[jira] [Resolved] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10910.
---
   Resolution: Fixed
Fix Version/s: 6.7
   master (7.0)

> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Updated] (SOLR-10982) Deprecate FieldCache

2017-06-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10982:
-
Affects Version/s: master (7.0)
  Environment: 



  was:





> Deprecate FieldCache
> 
>
> Key: SOLR-10982
> URL: https://issues.apache.org/jira/browse/SOLR-10982
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
> Environment: 
>Reporter: Tomás Fernández Löbbe
>
> Extracting this idea suggested by [~thetaphi] in SOLR-10803. The proposal is 
> to:
> # Enable DocValues by default for numeric/string/date fields. (SOLR-10808)
> # Have a merge policy that can generate the DocValues at merge time if a 
> field doesn’t have them but should according to the schema (SOLR-10046). Make 
> this Merge Policy the default.
> # When using an index created with 7.x (maybe using the new metadata added by 
> [~jpountz] recently) and something tries to access FieldCache (e.g. for 
> sorting or faceting or functions), it should fail the query. 
> # Remove FieldCache in 8.0






[jira] [Commented] (SOLR-10982) Deprecate FieldCache

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069434#comment-16069434
 ] 

Tomás Fernández Löbbe commented on SOLR-10982:
--

I don't think I'll have time to work on this now, but I didn't want the 
proposal to be lost in Jira comments. I guess depending on timing, the major 
version where this happens can change.

> Deprecate FieldCache
> 
>
> Key: SOLR-10982
> URL: https://issues.apache.org/jira/browse/SOLR-10982
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
> Environment: 
>Reporter: Tomás Fernández Löbbe
>
> Extracting this idea suggested by [~thetaphi] in SOLR-10803. The proposal is 
> to:
> # Enable DocValues by default for numeric/string/date fields. (SOLR-10808)
> # Have a merge policy that can generate the DocValues at merge time if a 
> field doesn’t have them but should according to the schema (SOLR-10046). Make 
> this Merge Policy the default.
> # When using an index created with 7.x (maybe using the new metadata added by 
> [~jpountz] recently) and something tries to access FieldCache (e.g. for 
> sorting or faceting or functions), it should fail the query. 
> # Remove FieldCache in 8.0






[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069435#comment-16069435
 ] 

Tomás Fernández Löbbe commented on SOLR-10803:
--

bq. Maybe also similar stuff to prevent FieldCache usage?
Created SOLR-10982. Feel free to comment there.

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Created] (SOLR-10982) Deprecate FieldCache

2017-06-29 Thread JIRA
Tomás Fernández Löbbe created SOLR-10982:


 Summary: Deprecate FieldCache
 Key: SOLR-10982
 URL: https://issues.apache.org/jira/browse/SOLR-10982
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: 


Reporter: Tomás Fernández Löbbe


Extracting this idea suggested by [~thetaphi] in SOLR-10803. The proposal is to:
# Enable DocValues by default for numeric/string/date fields. (SOLR-10808)
# Have a merge policy that can generate the DocValues at merge time if a field 
doesn’t have them but should according to the schema (SOLR-10046). Make this 
Merge Policy the default.
# When using an index created with 7.x (maybe using the new metadata added by 
[~jpountz] recently) and something tries to access FieldCache (e.g. for sorting 
or faceting or functions), it should fail the query. 
# Remove FieldCache in 8.0






[jira] [Resolved] (SOLR-10574) Choose a default configset for Solr 7

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-10574.
-
Resolution: Fixed

Documentation for this feature remains, which I'll complete as part of 
SOLR-10272. Thanks everyone for feedback and help!

If there's something that remains to be done but doesn't have a child JIRA 
here, please create a sub-task and add it here.

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10574.patch, SOLR-10574.patch, SOLR-10574.patch, 
> SOLR-10574.patch
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Remove data_driven_schema_configs and basic_configs
> # Introduce a combined configset, {{_default}} based on the above two 
> configsets.
> # Build a "toggleable" data driven functionality into {{_default}}
> Usage:
> # Create a collection (using _default configset)
> # Data driven / schemaless functionality is enabled by default; so just start 
> indexing your documents.
> # If don't want data driven / schemaless, disable this behaviour: {code}
> curl http://host:8983/solr/coll1/config -d '{"set-user-property": 
> {"update.autoCreateFields":"false"}}'
> {code}
> # Create schema fields using schema API, and index documents






[jira] [Created] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-06-29 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7892:
---

 Summary: LatLonDocValuesField methods should be clearly marked as 
slow
 Key: LUCENE-7892
 URL: https://issues.apache.org/jira/browse/LUCENE-7892
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


It is very trappy that LatLonDocValuesField has stuff like 
newBoxQuery/newDistanceQuery.

Users bring this up on the user list and are confused as to why the resulting 
queries are slow.

Here, we hurt the typical use case, to try to slightly speed up an esoteric one 
(sparse stuff). It's a terrible tradeoff for the API.

If we truly must have such slow methods in the public API, then they should 
have {{slow}} in their name.






Re: Release planning for 7.0

2017-06-29 Thread Ishan Chattopadhyaya
Hi Anshum,
I'd like to have SOLR-10282 in for 7.0. It is a low impact new feature that
helps admins to enable Kerberos more easily using the bin/solr script.
I should be able to have it dev-complete by end of Friday. Let me know if
you have any objections.
Thanks,
Ishan

On Thu, Jun 29, 2017 at 1:00 AM, Anshum Gupta 
wrote:

> Hi Christine,
>
> With my current progress, which is much slower than I'd have liked it
> to be, I think there is still a day before the branches are cut. How far
> out do you think you are with this?
>
> -Anshum
>
>
> On Wed, Jun 28, 2017 at 9:59 AM Uwe Schindler  wrote:
>
>> Hi Anshum,
>>
>>
>>
>> I have a häckidihickhäck workaround for the Hadoop Java 9 issue. It is
>> already committed to master and 6.x branch, so the issue is fixed:
>> https://issues.apache.org/jira/browse/SOLR-10966
>>
>>
>>
>> I lowered the Hadoop-Update (https://issues.apache.org/
>> jira/browse/SOLR-10951) issue to “Major” level, so it is no longer
>> blocker.
>>
>>
>>
>> Nevertheless, we should fix the startup scripts for Java 9 in master
>> before release of Solr 7, because currently the shell scripts fail (on
>> certain platforms). And Java 9 is coming soon, so we should really have
>> support, because the speed improvements are a main reason to move your Solr
>> servers to Java 9.
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
>> *Sent:* Sunday, June 25, 2017 7:52 PM
>>
>>
>> *To:* dev@lucene.apache.org
>> *Subject:* Re: Release planning for 7.0
>>
>>
>>
>> Hi Uwe,
>>
>>
>>
>> +1 on getting SOLR-10951
>>  in before the release
>> but I assume you weren't hinting at holding back the branch creation :).
>>
>>
>>
>> I am not well versed with that stuff so it would certainly be optimal for
>> someone else to look at that.
>>
>>
>>
>> -Anshum
>>
>> On Sun, Jun 25, 2017 at 9:58 AM Uwe Schindler  wrote:
>>
>> Hi,
>>
>>
>>
>> currently we have the following problem:
>>
>>- The first Java 9 release candidate came out. This one now uses the
>>final version format. The string returned by ${java.version} is now plain
>>simple “9” – bummer for one single 3rd party library!
>>- This breaks one of the most basic Hadoop classes, so anything in
>>Solr that refers somehow to Hadoop breaks. Of course this is HDFS - but
>>also authentication! We should support Java 9, so we should really fix 
>> this
>>ASAP!
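The breakage above comes from code that assumes {{java.version}} always looks like "1.8.0_131", while Java 9 returns plain "9". A minimal Python sketch of a parser that tolerates both schemes (illustrative only; the actual fix belongs in Hadoop's Java code):

```python
def java_major_version(version: str) -> int:
    """Return the Java major version from a java.version string.

    Handles both the legacy scheme ("1.8.0_131" -> 8) and the
    JEP 223 scheme used from Java 9 onward ("9", "9.0.1" -> 9).
    """
    parts = version.split(".")
    if parts[0] == "1":
        # legacy "1.x.y_z" scheme: the second component is the major version
        return int(parts[1])
    # new scheme: first component is the major version; tolerate "9-ea" suffixes
    return int(parts[0].split("-")[0])

print(java_major_version("1.8.0_131"))
print(java_major_version("9"))
```

The point is that any parser hard-wired to find a second dotted component fails on the bare string "9".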
>>
>>
>>
>> From now on all tests running with Java 9 fail on Jenkins until we fix
>> the following:
>>
>>- Get an Update from Hadoop Guys (2.7.4), with just the stupid check
>>removed (the completely useless version-checking code snippet has already
>>made its rounds on Twitter): https://issues.apache.org/
>>jira/browse/HADOOP-14586
>>- Or we update at least master/7.0 to latest Hadoop version, which
>>has the bug already fixed. Unfortunately this does not work, as there is a
>>bug in the Hadoop MiniDFSCluster that hangs on test shutdown. I have no
>>idea how to fix. See https://issues.apache.org/jira/browse/SOLR-10951
>>
>>
>>
>> I’d prefer to fix https://issues.apache.org/jira/browse/SOLR-10951 for
>> master before release, so I set it as blocker. I am hoping for help from Mark
>> Miller. If the hadoop people have a simple bugfix release for the earlier
>> version, we may also be able to fix branch_6x and branch_6_6 (but I
>> disabled them on Jenkins anyways).
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net]
>> *Sent:* Saturday, June 24, 2017 10:52 PM
>>
>>
>> *To:* dev@lucene.apache.org
>> *Subject:* Re: Release planning for 7.0
>>
>>
>>
>> I'll create the 7x, and 7.0 branches *tomorrow*.
>>
>>
>>
>> Ishan, do you mean you would be able to close it by Tuesday? You would
>> have to commit to both 7.0, and 7.x, in addition to master, but I think
>> that should be ok.
>>
>>
>>
>> We also have SOLR-10803 open at this moment and we'd need to come to a
>> decision on that as well in order to move forward with 7.0.
>>
>>
>>
>> P.S: If there are any objections to this plan, kindly let me know.
>>
>>
>>
>> -Anshum
>>
>>
>>
>> On Fri, Jun 23, 2017 at 5:03 AM Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> wrote:
>>
>> Hi Anshum,
>>
>>
>>
>> > I will send out an email a day before cutting the branch, as well as
>> once the branch is in place.
>>
>> I'm right now on travel, and unable to finish SOLR-10574 until Monday
>> (possibly Tuesday).
>>
>> Regards,
>>
>> Ishan
>>
>>
>>
>> On Tue, Jun 20, 2017 at 5:08 PM, Anshum Gupta 
>> wrote:
>>
>> From my understanding, there's not really a 'plan' but 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 984 - Unstable

2017-06-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/984/

1 tests failed.
FAILED:  
org.apache.solr.handler.component.DistributedSpellCheckComponentTest.test

Error Message:
Error from server at http://127.0.0.1:57388//collection1: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:55033//collection1, 
http://[ff01::114]:2, http://[ff01::083]:2, http://[ff01::213]:2]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:57388//collection1: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:55033//collection1, 
http://[ff01::114]:2, http://[ff01::083]:2, http://[ff01::213]:2]
at 
__randomizedtesting.SeedInfo.seed([EF61B14FABB8E8D7:67358E950544852F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:594)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:564)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:612)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:594)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:573)
at 
org.apache.solr.handler.component.DistributedSpellCheckComponentTest.test(DistributedSpellCheckComponentTest.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-10282) bin/solr support for enabling Kerberos security

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10282:

Attachment: SOLR-10282.patch

Here's a patch for enabling Kerberos support using the bin/solr script. Here's the 
expected usage:

{code}
$ bin/solr auth enable -type kerberos -config 
"-Djava.security.auth.login.config=/home/foo/jaas-client.conf 
-Dsolr.kerberos.cookie.domain=192.168.0.107 
-Dsolr.kerberos.cookie.portaware=true 
-Dsolr.kerberos.principal=HTTP/192.168.0@example.com 
-Dsolr.kerberos.keytab=/keytabs/107.keytab"
{code}

This will upload a security.json to ZK that sets up KerberosPlugin, and adds 
the "config" parameter to the solr.in.sh. The user would need to restart the 
node after performing this step.

Going forward, we can make this script interactive by accepting (and 
guiding/suggesting) the various configuration parameters, like a wizard, 
possibly even helping the user write the JAAS configs.

There are some nocommits that need to be resolved before committing, but this 
is close and it seems to work functionally.
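For reference, the security.json that such a command uploads is essentially just the authentication-plugin declaration. A hedged Python sketch of that document (the exact contents written by the patch may include additional fields not shown here):

```python
import json

# Minimal security.json enabling Solr's Kerberos authentication plugin.
# The class name is the documented one; any extra parameters the patch
# writes (cookie domain, principal, etc.) live in solr.in.sh, not here.
security_json = {
    "authentication": {
        "class": "org.apache.solr.security.KerberosPlugin"
    }
}

print(json.dumps(security_json, indent=2))
```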

> bin/solr support for enabling Kerberos security
> ---
>
> Key: SOLR-10282
> URL: https://issues.apache.org/jira/browse/SOLR-10282
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0)
>
> Attachments: SOLR-10282.patch
>
>
> This is in the same spirit as SOLR-8440.






[jira] [Assigned] (SOLR-10282) bin/solr support for enabling Kerberos security

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-10282:
---

Assignee: Ishan Chattopadhyaya

> bin/solr support for enabling Kerberos security
> ---
>
> Key: SOLR-10282
> URL: https://issues.apache.org/jira/browse/SOLR-10282
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0)
>
>
> This is in the same spirit as SOLR-8440.






[jira] [Updated] (SOLR-10282) bin/solr support for enabling Kerberos security

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10282:

Fix Version/s: master (7.0)

> bin/solr support for enabling Kerberos security
> ---
>
> Key: SOLR-10282
> URL: https://issues.apache.org/jira/browse/SOLR-10282
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0)
>
>
> This is in the same spirit as SOLR-8440.






[jira] [Commented] (SOLR-10864) Add static (test only) boolean to PointField indicating 'precisionStep' should be ignored so we can simplify points randomization in our schemas

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069403#comment-16069403
 ] 

Tomás Fernández Löbbe commented on SOLR-10864:
--

Sorry for the late response, I've been taking some time off. One minor comment, 
I think these changes left {{TestRandomDVFaceting}} in a bad state. It was 
originally intended to compare faceting between dv=true and dv=false. It would 
also compare with point fields most of the time (see 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/TestRandomDVFaceting.java#L250-L258
 ). After this change, half the time (when {{randomizeNumericTypesProperties}} 
decides to use PointFields) the test just compares dv=true three times. 
For consistency with other schemas, maybe we want 
{{schema-docValuesFaceting.xml}} to have specific trie fields and use those in 
this test. 

> Add static (test only) boolean to PointField indicating 'precisionStep' 
> should be ignored so we can simplify points randomization in our schemas
> 
>
> Key: SOLR-10864
> URL: https://issues.apache.org/jira/browse/SOLR-10864
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
> Attachments: SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch, 
> SOLR-10864.patch, SOLR-10864.patch, SOLR-10864.patch
>
>
> (I'm spinning this idea out of parent jira SOLR-10807 so that it gets its 
> own jira# w/ its own summary for increased visibility/comments)
> In the interest of making it easier & more straight forward to get good 
> randomized test coverage of Points fields, I'd like to add the following to 
> the {{PointField}} class...
> {code}
>  /**
>   * 
>   * The Test framework can set this global variable to instruct PointField 
> that
>   * (on init) it should be tolerant of the precisionStep 
> argument used by TrieFields.
>   * This allows for simple randomization of TrieFields and PointFields w/o 
> extensive duplication
>   * of <fieldType/> declarations.
>   * 
>   *
>   * NOTE: When {@link TrieField} is removed, this boolean must also be 
> removed
>   *
>   * @lucene.internal
>   * @lucene.experimental
>   */
>  public static boolean TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS = false;
>  /** 
>   * NOTE: This method can be removed completely when
>   * {@link #TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS} is removed 
>   */
>  @Override
>  protected void init(IndexSchema schema, Map<String, Object> args) {
>super.init(schema, args);
>if (TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS) {
>  args.remove("precisionStep");
>}
>  }
> {code}
> Then in SolrTestCaseJ4, set 
> {{PointField.TEST_HACK_IGNORE_USELESS_TRIEFIELD_ARGS}} on a class by class 
> basis when randomizing Trie/Points (and unset @AfterClass).
> (details to follow in comment)






[jira] [Commented] (SOLR-10397) Port 'autoAddReplicas' feature to the policy rules framework and make it work with non-shared filesystems

2017-06-29 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069392#comment-16069392
 ] 

Cao Manh Dat commented on SOLR-10397:
-

[~shalinmangar] 
1. Yeah, that's a good catch; I will fix that soon.
2. We cannot do the integration test if we do not have ExecutePlanAction, and 
{{AutoAddReplicasPlanActionTest.testSimple}} is a test for 
AutoAddReplicasPlanAction only, so I removed the created trigger to ensure that in 
the future we do not have to touch that test.


> Port 'autoAddReplicas' feature to the policy rules framework and make it work 
> with non-shared filesystems
> -
>
> Key: SOLR-10397
> URL: https://issues.apache.org/jira/browse/SOLR-10397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10397.1.patch, SOLR-10397.patch
>
>
> Currently 'autoAddReplicas=true' can be specified in the Collection Create 
> API to automatically add replicas when a replica becomes unavailable. I 
> propose to move this feature to the autoscaling cluster policy rules design.
> This will include the following:
> * Trigger support for ‘nodeLost’ event type
> * Modification of existing implementation of ‘autoAddReplicas’ to 
> automatically create the appropriate ‘nodeLost’ trigger.
> * Any such auto-created trigger must be marked internally such that setting 
> ‘autoAddReplicas=false’ via the Modify Collection API should delete or 
> disable corresponding trigger.
> * Support for non-HDFS filesystems while retaining the optimization afforded 
> by HDFS i.e. the replaced replica can point to the existing data dir of the 
> old replica.
> * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across 
> the entire cluster using cluster properties in favor of using the 
> suspend-trigger/resume-trigger APIs.
> This will retain backward compatibility for the most part and keep a common 
> use-case easy to enable as well as make it available to more people (i.e. 
> people who don't use HDFS).
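As a sketch of what the auto-created 'nodeLost' trigger described above might look like through the autoscaling API (trigger name, waitFor, and fields are illustrative placeholders, not the final design):

```python
import json

# Illustrative set-trigger command that setting autoAddReplicas=true
# could auto-create. All values here are placeholders for discussion;
# the real trigger definition is decided by the implementation.
set_trigger = {
    "set-trigger": {
        "name": ".auto_add_replicas",
        "event": "nodeLost",
        "waitFor": "30s",
        "enabled": True,
    }
}

print(json.dumps(set_trigger, indent=2))
```

Marking the trigger with a reserved name like this is one way to let a later autoAddReplicas=false call find and disable it.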






[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Fix Version/s: 4.10.4
   master (7.0)

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 4.10.4, 6.6, master (7.0)
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.
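The detection rule described in the patch can be sketched as follows (a Python illustration of the logic only; the patch itself is Java, in SolrJ):

```python
def looks_gzipped(name, content_encoding=None):
    """Mirror the rule described above: a URL is treated as gzipped when
    the response's Content-Encoding is "gzip" or the path ends in ".gz";
    a local file is treated as gzipped only by its ".gz" suffix."""
    if content_encoding is not None and content_encoding.lower() == "gzip":
        return True
    return name.endswith(".gz")

print(looks_gzipped("data.csv.gz"))
print(looks_gzipped("data.csv", "gzip"))
print(looks_gzipped("data.csv"))
```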






Re: Removing Solr deprecations for 7.0

2017-06-29 Thread Varun Thacker
Here's another one I found in web.xml




<servlet>
  <servlet-name>RedirectOldAdminUI</servlet-name>
  <servlet-class>org.apache.solr.servlet.RedirectServlet</servlet-class>
  <init-param>
    <param-name>destination</param-name>
    <param-value>${context}/#/</param-value>
  </init-param>
</servlet>



On Tue, Jun 27, 2017 at 8:11 AM, Erick Erickson 
wrote:

> Hmmm, I probably put the one in SolrResourceLoader.java:108 but it
> sure looks bogus. The name "coreProperties" fooled me into thinking it
> was part of the solr.xml <cores> tag processing but apparently not.
> I'll remove the comment as part of the current patch I'm working on.
>
> Thanks for finding!
>
> Erick
>
> On Tue, Jun 27, 2017 at 1:54 AM, Jan Høydahl 
> wrote:
> > A quick grep for “Solr 7” and “remove in" gives
> >
> > ./core/src/java/org/apache/solr/core/SolrResourceLoader.java:108:
//TODO: Solr5. Remove this completely when you obsolete putting <core> tags
> in solr.xml (See Solr-4196)
> > ./core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java:464:
> //TODO should we not remove in the next release ?
> > ./core/src/java/org/apache/solr/core/NodeConfig.java:309:// Remove
> in Solr 7.0
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:105:   *
> @deprecated Use the ctor that also takes printerNewline.  This ctor will be
> removed in Solr 7.
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:117:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:122:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:127:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:133:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:138:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:145:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:152:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:159:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:164:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:173:  /**
> @deprecated will be removed in Solr 7 */
> > ./core/src/java/org/apache/solr/internal/csv/CSVStrategy.java:186: *
> @deprecated will be removed in Solr 7
> >
> >
> > Anyone remember writing those comments? Now is the time to nuke that
> dead code :)
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 976 - Still unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/976/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:51059","node_name":"127.0.0.1:51059_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:51092;,   "node_name":"127.0.0.1:51092_",  
 "state":"down"}, "core_node2":{   "state":"down",  
 "base_url":"http://127.0.0.1:51066;,   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:51066_"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:51059;,   "node_name":"127.0.0.1:51059_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:51059","node_name":"127.0.0.1:51059_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:51092;,
  "node_name":"127.0.0.1:51092_",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:51066;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:51066_"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:51059;,
  "node_name":"127.0.0.1:51059_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3AD81D6A6EF9127C:B28C22B0C0057F84]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: (was: SOLR-10981.patch)

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: SOLR-10981.patch

Correctly named patch, generated from the command line rather than IntelliJ.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[jira] [Issue Comment Deleted] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Comment: was deleted

(was: Correctly named patch file.)

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+175) - Build # 3856 - Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3856/
Java: 64bit/jdk-9-ea+175 -XX:+UseCompressedOops -XX:+UseSerialGC 
--illegal-access=deny

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrBootstrapTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:91)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:728)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:923)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:855)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:388)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:363)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:307)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:202)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:427)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:301) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:400) 
 at 
org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:757)
  at 

[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: SOLR-10981.patch

Correctly named patch file.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
> Attachments: SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[jira] [Issue Comment Deleted] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Comment: was deleted

(was: Patch for code and tests.)

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: (was: 
Added_gzip_support_for_update_files__Updated_test_code_to_use_try_with_resources_.patch)

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.






Re: 7x, and 7.0 branches

2017-06-29 Thread Uwe Schindler
The problem is that old 7.x indexes still use some codecs named after version 6; 
they were never renamed!

So the backward-codecs module must keep everything, both META-INF entries and 
classes, that the original 7.0 index format requires. Maybe create a dummy 7.0 
index in branch-7x to have a list of codecs to test.

Uwe
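The reason those META-INF entries matter is that codecs are looked up by name at read time. A minimal sketch of that name-based lookup follows; this is a simplified model, not Lucene's actual NamedSPILoader code, and the names registered here are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of name-based SPI lookup (what Codec.forName does via
// Lucene's NamedSPILoader). In Lucene the registered names come from
// META-INF/services entries found on the classpath; remove an entry and
// lookups for that name fail with an IllegalArgumentException, like the
// "Could not load codec 'Lucene60'" failures discussed in this thread.
public class NamedRegistry {
    private final Map<String, String> byName = new LinkedHashMap<>();

    // In the real loader this happens automatically for every provider
    // listed under META-INF/services on the classpath.
    void register(String name, String implClass) {
        byName.put(name, implClass);
    }

    String lookup(String name) {
        String impl = byName.get(name);
        if (impl == null) {
            throw new IllegalArgumentException("An SPI class with name '" + name
                + "' does not exist. The current classpath supports: " + byName.keySet());
        }
        return impl;
    }
}
```

This is why removing a backward codec's class but leaving (or removing) its services entry out of sync breaks index opening at runtime rather than at compile time.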

On June 30, 2017, at 00:43:06 MESZ, Anshum Gupta wrote:
>I’ve pushed more changes there, and we have a new set of errors. This
>is one of them:
>
>[junit4]   2> NOTE: reproduce with: ant test 
>-Dtestcase=TestBackwardsCompatibility
>-Dtests.method=testUnsupportedOldIndexes -Dtests.seed=8FDA7D3598A2FB46
>-Dtests.slow=true -Dtests.locale=ar-LB
>-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true
>-Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.07s |
>TestBackwardsCompatibility.testUnsupportedOldIndexes <<<
>[junit4]> Throwable #1: java.lang.IllegalArgumentException: Could
>not load codec 'Lucene60'.  Did you forget to add
>lucene-backward-codecs.jar?
>[junit4]>  at
>__randomizedtesting.SeedInfo.seed([8FDA7D3598A2FB46:74214F1628395C1A]:0)
>[junit4]>  at
>org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:433)
>[junit4]>  at
>org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:360)
>[junit4]>  at
>org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:295)
>[junit4]>  at
>org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:59)
>[junit4]>  at
>org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
>[junit4]>  at
>org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
>[junit4]>  at
>org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:79)
>[junit4]>  at
>org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
>[junit4]>  at
>org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:613)
>   [junit4]>   at java.lang.Thread.run(Thread.java:748)
>[junit4]> Caused by: java.lang.IllegalArgumentException: An SPI
>class of type org.apache.lucene.codecs.Codec with name 'Lucene60' does
>not exist.  You need to add the corresponding JAR file supporting this
>SPI to your classpath.  The current classpath supports the following
>names: [Asserting, CheapBastard, FastCompressingStoredFields,
>FastDecompressionCompressingStoredFields,
>HighCompressionCompressingStoredFields, DummyCompressingStoredFields,
>SimpleText, Lucene70]
>
>
>Do you intend to Ignore this for now? Also, in the last commit, I’ve
>Ignored a bunch of tests that use the old indexes.
>
>
>-Anshum
>
>
>
>> On Jun 29, 2017, at 3:10 PM, Anshum Gupta  wrote:
>> 
>> I did remove the declaration in META-INF/services, at least everything
>that had a version in its name, i.e. 5x or 6x.
>> 
>> I’ve also renamed the indexes for 6x, but here are a few that I
>wasn’t sure about what to do with these:
>> sorted.6.3.0.zip
>> sorted.6.2.1.zip
>> sorted.6.2.0.zip
>> moreterms.6.0.0.zip
>> maxposindex.zip
>> manypointsindex.zip
>> empty.6.0.0.zip
>> dvupdates.6.0.0.zip
>> 
>> Considering you suggested disabling the tests, should we be removing
>these indexes and regenerating them post-release when we re-enable
>tests, or should we keep them here and just disable the tests?
>> 
>> I’ve reverted the changes in SegmentInfos.java, and also changed
>testIllegalCreatedVersion as per your suggestion.
>> 
>> I’m running the tests now, and will commit to my fork right after. 
>> 
>> Thanks for helping out with this.
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Jun 29, 2017, at 2:33 PM, Adrien Grand > wrote:
>>> 
>>> Removing most backward codecs sounds good to me since the only codec
>that 8.0 needs to be able to read so far is the 7.0 codec which is in
>core. It looks like you removed the code, but you also need to remove
>their declaration in META-INF/services or the SPI loader will try to
>load them and fail since it cannot find the class.
>>> 
>>> Backcompat indexes will be added as we perform 7.x releases. However
>you'd need to rename the 6.x indices from index.6.x.x to
>unsupported.6.x.x.
>>> 
>>> We have some specific tests like "moreterms" and "dvupdates". I
>think we need to disable them for now and make sure to reenable them
>once 7.0 is released.
>>> 
>>> I think the changes you did in SegmentInfos.java are not necessary. It
>looks like the version numbers are related to the
>current major, but it is actually due to the fact that 7.0 is the first
>version to record the version that was used at creation time. I think
>you can revert changes in this file entirely. In the
>testIllegalCreatedVersion test, I'd just replace 8 with 9 or
>Version.CURRENT.major + 1.
>>> 
>>> We'd need to remove the 

[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: 
Added_gzip_support_for_update_files__Updated_test_code_to_use_try_with_resources_.patch

Patch for code and tests.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 6.6
>
> Attachments: 
> Added_gzip_support_for_update_files__Updated_test_code_to_use_try_with_resources_.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10981) Allow update to load gzip files

2017-06-29 Thread Andrew Lundgren (JIRA)
Andrew Lundgren created SOLR-10981:
--

 Summary: Allow update to load gzip files 
 Key: SOLR-10981
 URL: https://issues.apache.org/jira/browse/SOLR-10981
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 6.6
Reporter: Andrew Lundgren
 Fix For: 6.6


We currently import large CSV files.  We store them in gzip files as they 
compress at around 80%.

To import them we must gunzip them and then import them.  After that we no 
longer need the decompressed files.

This patch allows directly opening either URL, or local files that are gzipped.

For URLs, to determine if the file is gzipped, it will check the content 
encoding=="gzip" or if the file ends in ".gz"

For files, if the file ends in ".gz" then it will assume the file is gzipped.

I have tested the patch with 4.10.4, 6.6.0 and master from git.
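As a rough sketch, the detection rules described above look like this; the class and method names here are hypothetical, not the patch's actual SolrJ code:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

// Simplified model of the gzip detection described in the issue:
// URLs are gzipped if Content-Encoding == "gzip" or the path ends in ".gz";
// local files are gzipped if the name ends in ".gz".
public class GzipDetect {

    // URLs: trust the Content-Encoding response header, fall back to the suffix.
    static boolean urlLooksGzipped(String path, String contentEncoding) {
        return "gzip".equalsIgnoreCase(contentEncoding) || path.endsWith(".gz");
    }

    // Local files: only the ".gz" suffix is available.
    static boolean fileLooksGzipped(File f) {
        return f.getName().endsWith(".gz");
    }

    // Wrap the raw stream in a GZIPInputStream only when detection says so.
    static InputStream open(File f) throws IOException {
        InputStream in = new FileInputStream(f);
        return fileLooksGzipped(f) ? new GZIPInputStream(in) : in;
    }
}
```

Checking the Content-Encoding header first means an S3-style URL without a ".gz" suffix can still be decompressed when the server declares the encoding.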






[jira] [Created] (SOLR-10980) SynonymGraphFilterFactory proximity search error

2017-06-29 Thread JIRA
Diogo Guilherme Leão Edelmuth created SOLR-10980:


 Summary: SynonymGraphFilterFactory proximity search error
 Key: SOLR-10980
 URL: https://issues.apache.org/jira/browse/SOLR-10980
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
Reporter: Diogo Guilherme Leão Edelmuth


There seems to be an issue when doing proximity searches that include terms 
that have multi-word synonyms.

Example:
consider this is configured in synonyms.txt:
(
grand mother, grandmother
grandfather, granddad
)
and there's an indexed field with: (My mother and my grandmother went...)

Proximity search with: ("mother grandmother"~8)
won't return the file, while ("father grandfather"~8) does return the analogous 
file.

I am not a developer of Solr, so pardon if I am wrong, but I ran it with 
debug=query and saw that when proximity searches are done with multi-term 
synonyms, the parsed query is a SpanNearQuery: 
"parsedquery":"SpanNearQuery(spanNear([laudo:mother,
spanOr([laudo:grand mother, laudo:grandmother])],*0*, true))"

while proximity searches with one-term synonyms are executed with:
"MultiPhraseQuery(laudo:\"father (grandfather granddad)\"~10)"

Note that the SpanNearQuery is built with a slop parameter of 0, no matter 
what is passed after the tilde. So if I search the exact phrase it does match.
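A toy positional model makes the effect of the slop value concrete. This is a deliberate simplification, not Lucene's SpanNearQuery implementation, and the names are illustrative:

```java
import java.util.List;

// Toy model of proximity ("slop") matching over token positions, only to
// illustrate why a slop of 0 matches nothing but adjacent terms / exact
// phrases. Real Lucene span queries are considerably more involved.
public class SlopDemo {

    // True if both terms occur and the number of positions between them
    // (0 for adjacent terms) is at most `slop`.
    static boolean within(List<String> tokens, String a, String b, int slop) {
        int pa = tokens.indexOf(a);
        int pb = tokens.indexOf(b);
        if (pa < 0 || pb < 0) return false;
        return Math.abs(pa - pb) - 1 <= slop;
    }
}
```

With the sample text "My mother and my grandmother went...", "mother" and "grandmother" are two positions apart, so a slop of 8 should match while the hard-coded slop of 0 only matches the exact phrase, which is the symptom reported above.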


Here is my field-type, just in case: [the fieldType XML was stripped by the 
mailing-list archive]




[jira] [Resolved] (SOLR-10979) Randomize PointFields in schema-docValues\*.xml and all affected tests

2017-06-29 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10979.
-
Resolution: Fixed
  Assignee: Hoss Man

> Randomize PointFields in schema-docValues\*.xml and all affected tests
> --
>
> Key: SOLR-10979
> URL: https://issues.apache.org/jira/browse/SOLR-10979
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
>







Re: 7x, and 7.0 branches

2017-06-29 Thread Anshum Gupta
I’ve pushed more changes there, and we have a new set of errors. This is one of 
them:

   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestBackwardsCompatibility -Dtests.method=testUnsupportedOldIndexes 
-Dtests.seed=8FDA7D3598A2FB46 -Dtests.slow=true -Dtests.locale=ar-LB 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   3.07s | 
TestBackwardsCompatibility.testUnsupportedOldIndexes <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: Could not 
load codec 'Lucene60'.  Did you forget to add lucene-backward-codecs.jar?
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8FDA7D3598A2FB46:74214F1628395C1A]:0)
   [junit4]>at 
org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:433)
   [junit4]>at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:360)
   [junit4]>at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:295)
   [junit4]>at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:59)
   [junit4]>at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
   [junit4]>at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
   [junit4]>at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:79)
   [junit4]>at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
   [junit4]>at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:613)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]> Caused by: java.lang.IllegalArgumentException: An SPI class of 
type org.apache.lucene.codecs.Codec with name 'Lucene60' does not exist.  You 
need to add the corresponding JAR file supporting this SPI to your classpath.  
The current classpath supports the following names: [Asserting, CheapBastard, 
FastCompressingStoredFields, FastDecompressionCompressingStoredFields, 
HighCompressionCompressingStoredFields, DummyCompressingStoredFields, 
SimpleText, Lucene70]


Do you intend to Ignore this for now? Also, in the last commit, I’ve Ignored a 
bunch of tests that use the old indexes.


-Anshum



> On Jun 29, 2017, at 3:10 PM, Anshum Gupta  wrote:
> 
> I did remove the declaration in META-INF/services, at least everything that 
> had a version in its name, i.e. 5x or 6x.
> 
> I’ve also renamed the indexes for 6x, but here are a few that I wasn’t sure 
> about what to do with these:
> sorted.6.3.0.zip
> sorted.6.2.1.zip
> sorted.6.2.0.zip
> moreterms.6.0.0.zip
> maxposindex.zip
> manypointsindex.zip
> empty.6.0.0.zip
> dvupdates.6.0.0.zip
> 
> Considering you suggested disabling the tests, should we be removing these 
> indexes and regenerating them post-release when we re-enable tests, or should 
> we keep them here and just disable the tests?
> 
> I’ve reverted the changes in SegmentInfos.java, and also changed 
> testIllegalCreatedVersion as per your suggestion.
> 
> I’m running the tests now, and will commit to my fork right after. 
> 
> Thanks for helping out with this.
> 
> -Anshum
> 
> 
> 
>> On Jun 29, 2017, at 2:33 PM, Adrien Grand > > wrote:
>> 
>> Removing most backward codecs sounds good to me since the only codec that 
>> 8.0 needs to be able to read so far is the 7.0 codec which is in core. It 
>> looks like you removed the code, but you also need to remove their 
>> declaration in META-INF/services or the SPI loader will try to load them and 
>> fail since it cannot find the class.
>> 
>> Backcompat indexes will be added as we perform 7.x releases. However you'd 
>> need to rename the 6.x indices from index.6.x.x to unsupported.6.x.x.
>> 
>> We have some specific tests like "moreterms" and "dvupdates". I think we 
>> need to disable them for now and make sure to reenable them once 7.0 is 
>> released.
>> 
>> I think the changes you did in SegmentInfos.java are not necessary. It looks 
>> like the version numbers are related to the 
>> current major, but it is actually due to the fact that 7.0 is the first 
>> version to record the version that was used at creation time. I think you 
>> can revert changes in this file entirely. In the testIllegalCreatedVersion 
>> test, I'd just replace 8 with 9 or Version.CURRENT.major + 1.
>> 
>> We'd need to remove the compatibility layer in similarities but it can be 
>> done as a follow-up.
>> 
>> Thanks for taking care of this!
>> 
>> On Thu, Jun 29, 2017 at 23:12, Anshum Gupta wrote:
>> Adrien, I’ve pushed some more changes and seems like I’d have 

Re: 7x, and 7.0 branches

2017-06-29 Thread Anshum Gupta
I did remove the declaration in META-INF/services, at least everything that had 
a version in its name, i.e. 5x or 6x.

I’ve also renamed the indexes for 6x, but here are a few that I wasn’t sure 
about what to do with these:
sorted.6.3.0.zip
sorted.6.2.1.zip
sorted.6.2.0.zip
moreterms.6.0.0.zip
maxposindex.zip
manypointsindex.zip
empty.6.0.0.zip
dvupdates.6.0.0.zip

Considering you suggested disabling the tests, should we be removing these 
indexes and regenerating them post-release when we re-enable tests, or should 
we keep them here and just disable the tests?

I’ve reverted the changes in SegmentInfos.java, and also changed 
testIllegalCreatedVersion as per your suggestion.

I’m running the tests now, and will commit to my fork right after. 

Thanks for helping out with this.

-Anshum



> On Jun 29, 2017, at 2:33 PM, Adrien Grand  wrote:
> 
> Removing most backward codecs sounds good to me since the only codec that 8.0 
> needs to be able to read so far is the 7.0 codec which is in core. It looks 
> like you removed the code, but you also need to remove their declaration in 
> META-INF/services or the SPI loader will try to load them and fail since it 
> cannot find the class.
> 
> Backcompat indexes will be added as we perform 7.x releases. However you'd 
> need to rename the 6.x indices from index.6.x.x to unsupported.6.x.x.
> 
> We have some specific tests like "moreterms" and "dvupdates". I think we need 
> to disable them for now and make sure to reenable them once 7.0 is released.
> 
> I think the changes you did in SegmentInfos.java are not necessary. It looks 
> like the version numbers are related to the 
> current major, but it is actually due to the fact that 7.0 is the first 
> version to record the version that was used at creation time. I think you can 
> revert changes in this file entirely. In the testIllegalCreatedVersion test, 
> I'd just replace 8 with 9 or Version.CURRENT.major + 1.
> 
> We'd need to remove the compatibility layer in similarities but it can be 
> done as a follow-up.
> 
> Thanks for taking care of this!
> 
> On Thu, Jun 29, 2017 at 23:12, Anshum Gupta wrote:
> Adrien, I’ve pushed some more changes and it seems like I’d have to regenerate 
> some test indexes but I’m not sure how to do that. Do you mind taking a look 
> at this in its current form, and also my commits? It is all @ my fork here: 
> https://github.com/anshumg/lucene-solr 
> 
> 
> P.S: I thought it’d make more sense to do this on a feature-branch but the 
> upgrade script wasn’t happy about that. 
> 
> -Anshum
> 
> 
> 
>> On Jun 29, 2017, at 9:20 AM, Anshum Gupta > > wrote:
>> 
>> Going with your suggestions, seems like we’d be wiping out all of the 
>> backward-codecs folder/package, is that correct? Also, do we need to put in 
>> anything to ensure back-compat between 6x and 7x?
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Jun 29, 2017, at 7:21 AM, Anshum Gupta >> > wrote:
>>> 
>>> Thanks Adrien, I’d want to try and do this myself as long as you can 
>>> validate the correctness :).
>>> 
>>> I’ll be working on this in a few hours and should have an update later 
>>> today and hopefully we’d wrap it up soon.
>>> 
>>> -Anshum
>>> 
>>> 
>>> 
 On Jun 28, 2017, at 10:39 AM, Adrien Grand > wrote:
 
 If you don't want to do it, I can do it tomorrow but if you'd like to give 
 it a try I'd be happy to help if you need any guidance.
 
 On Wed, Jun 28, 2017 at 19:38, Adrien Grand wrote:
 Hi Anshum,
 
 This looks like a good start to me. You would also need to remove the 6.x 
 version constants so that TestBackwardCompatibility does not think they 
 are worth testing, as well as all codecs, postings formats and doc values 
 formats that are defined in the lucene/backward-codecs module since they 
 are only about 6.x codecs.
 
 On Wed, Jun 28, 2017 at 09:57, Anshum Gupta wrote:
 Thanks for confirming that, Alan, I had similar thoughts but wasn’t sure. 
 
 I don’t want to change anything that I’m not confident about so I’m just 
 going to remove those and commit it to my fork. If someone who’s 
 confident agrees with what I’m doing, I’ll go ahead and make those changes 
 to the upstream :).
 
 -Anshum
 
 
 
> On Jun 28, 2017, at 12:54 AM, Alan Woodward  > wrote:
> 
> We don’t need to support lucene5x codecs in 7, so you should be able to 
> just remove those tests 

[jira] [Commented] (SOLR-10979) Randomize PointFields in schema-docValues\*.xml and all affected tests

2017-06-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069075#comment-16069075
 ] 

ASF subversion and git services commented on SOLR-10979:


Commit 0159d494f562a5a22c8e5ed7ad412fad62b5db55 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0159d49 ]

SOLR-10979: Randomize PointFields in schema-docValues*.xml and all affected 
tests


> Randomize PointFields in schema-docValues\*.xml and all affected tests
> --
>
> Key: SOLR-10979
> URL: https://issues.apache.org/jira/browse/SOLR-10979
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Fix For: master (7.0)
>
>







[jira] [Created] (SOLR-10979) Randomize PointFields in schema-docValues\*.xml and all affected tests

2017-06-29 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10979:
---

 Summary: Randomize PointFields in schema-docValues\*.xml and all 
affected tests
 Key: SOLR-10979
 URL: https://issues.apache.org/jira/browse/SOLR-10979
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man









[JENKINS-EA] Lucene-Solr-6.x-Windows (64bit/jdk-9-ea+173) - Build # 1012 - Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/1012/
Java: 64bit/jdk-9-ea+173 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.TestLTRScoringQuery

Error Message:
The test or suite printed 9310 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 9310 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([E177B5CBB660908D]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1716 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\temp\junit4-J0-20170629_202619_7172016758682651402527.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\temp\junit4-J1-20170629_202619_7186953012355657854564.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 294 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\test-framework\test\temp\junit4-J1-20170629_203202_59614385347221586352617.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\test-framework\test\temp\junit4-J0-20170629_203202_5963717178067173907231.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1049 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\analysis\common\test\temp\junit4-J1-20170629_203249_5368235743999102175406.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\analysis\common\test\temp\junit4-J0-20170629_203249_53613024462463926557228.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 213 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\analysis\icu\test\temp\junit4-J1-20170629_203437_2182704485034939336439.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] 

Re: 7x, and 7.0 branches

2017-06-29 Thread Adrien Grand
Removing most backward codecs sounds good to me since the only codec that
8.0 needs to be able to read so far is the 7.0 codec which is in core. It
looks like you removed the code, but you also need to remove their
declaration in META-INF/services or the SPI loader will try to load them
and fail since it cannot find the class.
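The failure mode described above can be reproduced with plain `java.util.ServiceLoader` (Lucene's codec loading goes through its own `NamedSPILoader`, but the behavior for a declared-but-missing class is analogous). `StaleSpiDemo` and `org.example.RemovedCodec` below are invented names, used only to illustrate why a stale `META-INF/services` entry breaks loading:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

public class StaleSpiDemo {
    public static void main(String[] args) throws IOException {
        // Build a throwaway classpath entry whose META-INF/services file still
        // declares an implementation class that no longer exists -- the same
        // situation as a removed codec whose services entry was left behind.
        Path dir = Files.createTempDirectory("spi-demo");
        Path services = dir.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        Files.writeString(services.resolve("java.lang.Runnable"),
                "org.example.RemovedCodec\n"); // the class itself was deleted
        try (URLClassLoader cl =
                 new URLClassLoader(new URL[] { dir.toUri().toURL() }, null)) {
            try {
                for (Runnable r : ServiceLoader.load(Runnable.class, cl)) {
                    System.out.println(r); // never reached
                }
            } catch (ServiceConfigurationError e) {
                // ServiceLoader fails as soon as it tries to instantiate
                // the declared-but-missing provider class.
                System.out.println("SPI failure caught");
            }
        }
    }
}
```

Removing the services entry together with the class avoids this error at load time.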

Backcompat indexes will be added as we perform 7.x releases. However you'd
need to rename the 6.x indices from index.6.x.x to unsupported.6.x.x.

We have some specific tests like "moreterms" and "dvupdates". I think we
need to disable them for now and make sure to reenable them once 7.0 is
released.

I think the changes you did in SegmentInfos.java are not necessary. It looks
like the version numbers are related to the current major, but that is
actually due to the fact that 7.0 is the first version to record the version
that was used at creation time. I think you can revert the changes in this
file entirely. In the testIllegalCreatedVersion test, I'd just replace 8 with
9 or Version.CURRENT.major + 1.
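The suggestion above can be sketched as follows. `Version` here is a minimal stand-in for `org.apache.lucene.util.Version` (an assumption, kept only to show why deriving the illegal major from `CURRENT.major + 1` keeps the test valid after future major bumps):

```java
public class IllegalVersionSketch {
    // Minimal stand-in for org.apache.lucene.util.Version -- only the
    // `major` field matters for this illustration.
    static final class Version {
        final int major;
        Version(int major) { this.major = major; }
        static final Version CURRENT = new Version(7);
    }

    // An index cannot have been created by a release newer than CURRENT.
    static boolean isSupportedCreatedMajor(int createdMajor) {
        return createdMajor <= Version.CURRENT.major;
    }

    public static void main(String[] args) {
        // Hard-coding 8 as the "illegal" major breaks once 8.0 becomes
        // CURRENT; deriving it from CURRENT keeps the test meaningful.
        int illegalMajor = Version.CURRENT.major + 1;
        System.out.println(isSupportedCreatedMajor(Version.CURRENT.major));
        System.out.println(isSupportedCreatedMajor(illegalMajor));
    }
}
```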

We'd need to remove the compatibility layer in similarities but it can be
done as a follow-up.

Thanks for taking care of this!

On Thu, Jun 29, 2017 at 23:12, Anshum Gupta wrote:

> Adrien, I’ve pushed some more changes and seems like I’d have to
> regenerate some test indexes but I’m not sure how to do that. Do you mind
taking a look at this in its current form, and also my commits? It is all
> @ my fork here: https://github.com/anshumg/lucene-solr
>
> P.S: I thought it’d make more sense to do this on a feature-branch but the
> upgrade script wasn’t happy about that.
>
> -Anshum
>
>
>
> On Jun 29, 2017, at 9:20 AM, Anshum Gupta  wrote:
>
> Going with your suggestions, seems like we’d be wiping out all of the
> backward-codecs folder/package, is that correct? Also, do we need to put
> in anything to ensure back-compat between 6x, and 7x?
>
> -Anshum
>
>
>
> On Jun 29, 2017, at 7:21 AM, Anshum Gupta  wrote:
>
> Thanks Adrien, I’d want to try and do this myself as long as you can
> validate the correctness :).
>
> I’ll be working on this in a few hours and should have an update later
> today and hopefully we’d wrap it up soon.
>
> -Anshum
>
>
>
> On Jun 28, 2017, at 10:39 AM, Adrien Grand  wrote:
>
> If you don't want to do it, I can do it tomorrow but if you'd like to give
> it a try I'd be happy to help if you need any guidance.
>
> On Wed, Jun 28, 2017 at 19:38, Adrien Grand wrote:
>
>> Hi Anshum,
>>
>> This looks like a good start to me. You would also need to remove the 6.x
>> version constants so that TestBackwardCompatibility does not think they are
>> worth testing, as well as all codecs, postings formats and doc values
>> formats that are defined in the lucene/backward-codecs module since they
>> are only about 6.x codecs.
>>
>> On Wed, Jun 28, 2017 at 09:57, Anshum Gupta wrote:
>>
>>> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure.
>>>
>>> I don’t want to change anything that I’m not confident about so I’m just
>>> going to remove those and commit it to my fork. If someone who’s
>>> confident agrees with what I’m doing, I’ll go ahead and make those changes
>>> to the upstream :).
>>>
>>> -Anshum
>>>
>>>
>>>
>>> On Jun 28, 2017, at 12:54 AM, Alan Woodward  wrote:
>>>
>>> We don’t need to support lucene5x codecs in 7, so you should be able to
>>> just remove those tests (and the relevant packages from
>>> backwards-codecs too), I think?
>>>
>>>
>>> On 28 Jun 2017, at 08:38, Anshum Gupta  wrote:
>>>
>>> I tried to move forward to see this work before automatically computing
>>> the versions but I have about 30-odd failing tests. I’ve made those changes
>>> and pushed to my local GitHub account in case you have the time to look:
>>> https://github.com/anshumg/lucene-solr
>>>
>>> Here’s the build summary if that helps:
>>>
>>>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out
>>> of 31):
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>>>[junit4]   -
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>>>[junit4]   -
>>> 

Re: 7x, and 7.0 branches

2017-06-29 Thread Anshum Gupta
Adrien, I’ve pushed some more changes and seems like I’d have to regenerate 
some test indexes but I’m not sure how to do that. Do you mind taking a look at 
this in its current form, and also my commits? It is all @ my fork here: 
https://github.com/anshumg/lucene-solr 

P.S: I thought it’d make more sense to do this on a feature-branch but the 
upgrade script wasn’t happy about that. 

-Anshum



> On Jun 29, 2017, at 9:20 AM, Anshum Gupta  wrote:
> 
> Going with your suggestions, seems like we’d be wiping out all of the 
> backward-codecs folder/package, is that correct? Also, do we need to put in 
> anything to ensure back-compat between 6x, and 7x?
> 
> -Anshum
> 
> 
> 
>> On Jun 29, 2017, at 7:21 AM, Anshum Gupta wrote:
>> 
>> Thanks Adrien, I’d want to try and do this myself as long as you can 
>> validate the correctness :).
>> 
>> I’ll be working on this in a few hours and should have an update later today 
>> and hopefully we’d wrap it up soon.
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Jun 28, 2017, at 10:39 AM, Adrien Grand wrote:
>>> 
>>> If you don't want to do it, I can do it tomorrow but if you'd like to give 
>>> it a try I'd be happy to help if you need any guidance.
>>> 
>>> On Wed, Jun 28, 2017 at 19:38, Adrien Grand wrote:
>>> Hi Anshum,
>>> 
>>> This looks like a good start to me. You would also need to remove the 6.x 
>>> version constants so that TestBackwardCompatibility does not think they are 
>>> worth testing, as well as all codecs, postings formats and doc values 
>>> formats that are defined in the lucene/backward-codecs module since they 
>>> are only about 6.x codecs.
>>> 
>>> On Wed, Jun 28, 2017 at 09:57, Anshum Gupta wrote:
>>> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure. 
>>> 
>>> I don’t want to change anything that I’m not confident about so I’m just 
>>> going to remove those and commit it to my fork. If someone who’s 
>>> confident agrees with what I’m doing, I’ll go ahead and make those changes 
>>> to the upstream :).
>>> 
>>> -Anshum
>>> 
>>> 
>>> 
On Jun 28, 2017, at 12:54 AM, Alan Woodward wrote:
 
 We don’t need to support lucene5x codecs in 7, so you should be able to 
 just remove those tests (and the relevant packages from 
 backwards-codecs too), I think?
 
 
> On 28 Jun 2017, at 08:38, Anshum Gupta wrote:
> 
> I tried to move forward to see this work before automatically computing 
> the versions but I have about 30-odd failing tests. I’ve made those 
> changes and pushed to my local GitHub account in case you have the time 
> to look: https://github.com/anshumg/lucene-solr 
>  
> 
> Here’s the build summary if that helps:
> 
>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of 
> 31):
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>[junit4]   - 
> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>[junit4]   - 
> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>[junit4] 
>[junit4] 
>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>[junit4] Execution time total: 47 seconds
>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24 
> ignored (24 assumptions)
> 
> 
> -Anshum
> 
> 
> 
>> On Jun 27, 2017, at 4:15 AM, Adrien Grand wrote:
>> 
>> The test***BackwardCompatibility cases can be removed since they make 
>> sure that Lucene 7 can read 

[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068936#comment-16068936
 ] 

Jan Høydahl commented on SOLR-9526:
---

[~steve_rowe] please fill in your wisdom regarding my question above :)

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>  Labels: dynamic-schema
> Fix For: master (7.0)
>
> Attachments: SOLR-9526.patch, SOLR-9526.patch, SOLR-9526.patch, 
> SOLR-9526.patch
>
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json=true=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> <str name="defaultFieldType">strings</str>
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E






[JENKINS-EA] Lucene-Solr-master-Windows (32bit/jdk-9-ea+173) - Build # 6690 - Still Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6690/
Java: 32bit/jdk-9-ea+173 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.V2ApiIntegrationTest.testCollectionsApi

Error Message:
Error from server at http://127.0.0.1:54526/solr: 
java.nio.file.InvalidPathException: Illegal char <�> at index 53: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.V2ApiIntegrationTest_1C22FDC53C33B0BD-001\tempDir-002

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:54526/solr: 
java.nio.file.InvalidPathException: Illegal char <�> at index 53: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.V2ApiIntegrationTest_1C22FDC53C33B0BD-001\tempDir-002
at 
__randomizedtesting.SeedInfo.seed([1C22FDC53C33B0BD:C0BC1A44842AD403]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:805)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:600)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:470)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:400)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1102)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:843)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:774)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.handler.V2ApiIntegrationTest.testCollectionsApi(V2ApiIntegrationTest.java:141)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068931#comment-16068931
 ] 

Houston Putman commented on SOLR-10123:
---

{quote}(FWIW Houston, attaching patches showing your progress/attempts makes it 
easier for people to follow along with exactly what you're doing and offer 
meaningful ideas/suggestions){quote}
I understand, I'm pretty new to this so I don't really know how this is done. 
I've unsuccessfully tried to do the patch thing before, so I'll just make a 
pull request this time.

{quote}Personally i consider it a feature of Points fields{quote}

I completely agree, and I'm glad that this is finally possible. My comment was 
more to describe how the tests would be affected by it. However it does 
introduce an interesting problem where String & Boolean fields cannot have 
duplicate values, but Numeric fields can. But that is not a huge issue.

I have changed the tests to check whether point fields are being used and to 
test accordingly, and they now pass with the randomized numeric fields in the 
schemas.

The changes are in the following pull request: 
[https://github.com/apache/lucene-solr/pull/215]

{quote}shouldn't it be???{quote}
Yes, it should be tested. There are a lot of new features in this release and 
it is going to take a while to add unit tests for all of them.

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068930#comment-16068930
 ] 

ASF GitHub Bot commented on SOLR-10123:
---

GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/215

SOLR-10123: Fix to better support numeric PointFields in Analytics.

Unit tests now use randomized numeric fields.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr analytics-points

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/215.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #215


commit c66d149b491bb47baac5f29101f5383aa93df280
Author: Houston Putman 
Date:   2017-06-29T19:52:08Z

SOLR-10123: Fix to better support numeric PointFields. Unit tests now use 
randomized numeric fields.




> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[GitHub] lucene-solr pull request #215: SOLR-10123: Fix to better support numeric Poi...

2017-06-29 Thread HoustonPutman
GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/215

SOLR-10123: Fix to better support numeric PointFields in Analytics.

Unit tests now use randomized numeric fields.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr analytics-points

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/215.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #215


commit c66d149b491bb47baac5f29101f5383aa93df280
Author: Houston Putman 
Date:   2017-06-29T19:52:08Z

SOLR-10123: Fix to better support numeric PointFields. Unit tests now use 
randomized numeric fields.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-10880) Support replica filtering by flavour

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068913#comment-16068913
 ] 

Tomás Fernández Löbbe commented on SOLR-10880:
--

I didn't look at the patch yet, but I like the idea. 
bq. What about a generic tagging possibility
+1

bq. ... Is it somehow connected to the new replication modes?
bq. No, I don't think this is related to the replication modes.
Maybe it could? We need a way to query specific types of replicas (i.e. only 
PULL replicas), this could be the way to do it too. I'll take a look at the 
patch next week


> Support replica filtering by flavour
> 
>
> Key: SOLR-10880
> URL: https://issues.apache.org/jira/browse/SOLR-10880
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Domenico Fabio Marino
>Priority: Minor
> Attachments: SOLR-10880.patch, SOLR-10880.patch
>
>
> Add a mechanism to allow queries to use only a subset of replicas(by 
> specifying the wanted replica "flavour").
> Some replicas have to be marked as "flavoured" before running the query.
> A query can specify ShardParams.SHARDS_REQUIRED_FLAVOUR to specify the 
> flavour it wants to use (Only one flavour can be specified) together with  
> ShardParams.SHARDS_CONSIDER_FLAVOURS set to true.
> The property Replica.REPLICA_FLAVOUR can be used to give a flavour to a 
> replica, the parameter it takes is a pipe ('|') separated list of flavours 
> (e.g. "chocolate|vanilla").
> The mappings between flavours is only computed when 
> ShardParams.SHARDS_CONSIDER_FLAVOURS is true, and it is computed separately 
> for each request.
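A minimal sketch of the flavour matching described above, assuming the pipe-separated property format from the description; `matches` is a hypothetical helper for illustration, not code from the attached patch:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class FlavourFilterSketch {
    // A replica's REPLICA_FLAVOUR property is a pipe-separated list of
    // flavours; a query names exactly one required flavour.
    static boolean matches(String replicaFlavours, String requiredFlavour) {
        if (replicaFlavours == null) {
            return false; // unflavoured replica never matches a required flavour
        }
        Set<String> flavours = Arrays.stream(replicaFlavours.split("\\|"))
                                     .collect(Collectors.toSet());
        return flavours.contains(requiredFlavour);
    }

    public static void main(String[] args) {
        System.out.println(matches("chocolate|vanilla", "vanilla"));
        System.out.println(matches("chocolate", "vanilla"));
    }
}
```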






[jira] [Updated] (SOLR-10710) LTR contrib failures

2017-06-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10710:
-
Priority: Major  (was: Blocker)

> LTR contrib failures
> 
>
> Key: SOLR-10710
> URL: https://issues.apache.org/jira/browse/SOLR-10710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Steve Rowe
> Fix For: master (7.0)
>
>
> Reproducing failures 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1304/] - {{git 
> bisect}} says {{06a6034d9}}, the commit on LUCENE-7730, is where the 
> {{TestFieldLengthFeature.testRanking()}} failure started:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFieldLengthFeature -Dtests.method=testRanking 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ja-JP 
> -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J1 | TestFieldLengthFeature.testRanking <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '8'!='1' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:EB385C1332233915]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.feature.TestFieldLengthFeature.testRanking(TestFieldLengthFeature.java:117)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestParallelWeightCreation 
> -Dtests.method=testLTRScoringQueryParallelWeightCreationResultOrder 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ar-SD 
> -Dtests.timezone=Europe/Skopje -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   1.59s J1 | 
> TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:1142D5ED603B4132]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder(TestParallelWeightCreation.java:45)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSelectiveWeightCreation 
> -Dtests.method=testSelectiveWeightsRequestFeaturesFromDifferentStore 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=hr-HR 
> -Dtests.timezone=Australia/Victoria -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:293FE248276551B1]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore(TestSelectiveWeightCreation.java:230)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLTRQParserPlugin -Dtests.method=ltrMoreResultsThanReRankedTest 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=es-NI 
> -Dtests.timezone=Africa/Mogadishu -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: 
> '0.09271725'!='0.105360515' @ response/docs/[3]/score
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:BD7644EA7596711B]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest(TestLTRQParserPlugin.java:94)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: 

[jira] [Commented] (SOLR-10710) LTR contrib failures

2017-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068872#comment-16068872
 ] 

Tomás Fernández Löbbe commented on SOLR-10710:
--

The tests still need improvement. The commit mostly comments out parts of the 
tests to prevent them from failing, but there is still more work to be done. We 
can either create a new Jira for that work or keep this one open; I'm fine 
either way. I do think this is no longer a blocker, so I'll reduce the severity 
to Major.

> LTR contrib failures
> 
>
> Key: SOLR-10710
> URL: https://issues.apache.org/jira/browse/SOLR-10710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
>
> Reproducing failures 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1304/] - {{git 
> bisect}} says {{06a6034d9}}, the commit on LUCENE-7730, is where the 
> {{TestFieldLengthFeature.testRanking()}} failure started:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFieldLengthFeature -Dtests.method=testRanking 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ja-JP 
> -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J1 | TestFieldLengthFeature.testRanking <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '8'!='1' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:EB385C1332233915]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.feature.TestFieldLengthFeature.testRanking(TestFieldLengthFeature.java:117)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestParallelWeightCreation 
> -Dtests.method=testLTRScoringQueryParallelWeightCreationResultOrder 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ar-SD 
> -Dtests.timezone=Europe/Skopje -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   1.59s J1 | 
> TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:1142D5ED603B4132]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder(TestParallelWeightCreation.java:45)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSelectiveWeightCreation 
> -Dtests.method=testSelectiveWeightsRequestFeaturesFromDifferentStore 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=hr-HR 
> -Dtests.timezone=Australia/Victoria -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:293FE248276551B1]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore(TestSelectiveWeightCreation.java:230)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLTRQParserPlugin -Dtests.method=ltrMoreResultsThanReRankedTest 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=es-NI 
> -Dtests.timezone=Africa/Mogadishu -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: 
> '0.09271725'!='0.105360515' @ response/docs/[3]/score
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:BD7644EA7596711B]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> 

[jira] [Commented] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-29 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068808#comment-16068808
 ] 

Shalin Shekhar Mangar commented on SOLR-10962:
--

Although from a solrconfig.xml perspective, moving commitReserveDuration to a 
top-level attribute for ReplicationHandler is fine, I prefer that we do not 
force users to add these configurations in solrconfig.xml and instead use a 
well-known name in Config API to update these settings in the same way that we 
can update autoCommit.maxTime etc. Editing solrconfig.xml by hand should never 
be the answer to solve configurability in SolrCloud going forward.

> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch
>
>
> With SolrCloud mode, when doing replication via IndexFetcher, we occasionally 
> see the Fetch fail and then get restarted from scratch in cases where an 
> Index file is deleted after fetch manifest is computed and before the fetch 
> actually transfers the file. The risk of this happening can be reduced with a 
> higher value of reserveCommitDuration. However, the current configuration 
> only allows this value to be adjusted for "master" mode. This change allows 
> the value to also be changed when using "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html






[jira] [Resolved] (SOLR-10977) Randomize the usage of Points based numerics in schema15.xml and all impacted tests

2017-06-29 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10977.
-
Resolution: Fixed
  Assignee: Hoss Man

> Randomize the usage of Points based numerics in schema15.xml and all impacted 
> tests
> ---
>
> Key: SOLR-10977
> URL: https://issues.apache.org/jira/browse/SOLR-10977
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0)
>
>







[jira] [Commented] (SOLR-10977) Randomize the usage of Points based numerics in schema15.xml and all impacted tests

2017-06-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068799#comment-16068799
 ] 

ASF subversion and git services commented on SOLR-10977:


Commit b7fb61d7b96a594766264c5c09ef3ba22870e223 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b7fb61d ]

 SOLR-10977: Randomize the usage of Points based numerics in schema15.xml and 
all impacted tests


> Randomize the usage of Points based numerics in schema15.xml and all impacted 
> tests
> ---
>
> Key: SOLR-10977
> URL: https://issues.apache.org/jira/browse/SOLR-10977
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Fix For: master (7.0)
>
>







[jira] [Commented] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068795#comment-16068795
 ] 

Hoss Man commented on SOLR-10962:
-

i'm not really familiar with this code -- so i'm making an assumption based on 
trust in Christine that the entire premise of this issue makes sense.  Based on 
that assumption and a quick skim of the patch, i think the overall approach is 
sound, but frankly I think the {{LOG.warn("Beginning with Solr 7.0...}} line is 
too weak, and should be something like...

{code}
// remove this error check & backcompat logic when Version.LUCENE_7_0_0 is removed
Config.assertWarnOrFail(
  "Beginning with Solr 7.0, master."+RESERVE + " is deprecated and should now be configured directly on the ReplicationHandler",
  (null == reserve),
  core.solrConfig.luceneMatchVersion.onOrAfter(Version.LUCENE_7_0_0));
{code}

that way:
* anyone starting with a clean (example) config will get an error if they try 
to use the old syntax
* anyone upgrading with an old config will just get a warning - until/unless 
they change the {{}} in their solrconfig.xml, at which point 
they must also change this
* once LUCENE_7_0_0 is removed from the code base, this error handling will 
stop compiling and we'll get a built in reminder that this special error 
checking (and the back compat code wrapped around it) can be removed.

(this pattern is the entire point of assertWarnOrFail)
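As a generic illustration of the gating pattern described above -- a deprecated setting that only warns under an old luceneMatchVersion but becomes a hard error under the current one -- here is a minimal stand-alone sketch. The helper name and signature are illustrative stand-ins, not Solr's actual {{Config.assertWarnOrFail}}:

```java
// Minimal sketch of the warn-or-fail gating pattern (illustrative, not Solr code).
public class WarnOrFailSketch {
    // collects warnings so the behavior is observable without a logging framework
    static final StringBuilder LOG = new StringBuilder();

    /**
     * If the condition fails: throw when failOnError is set (new configs),
     * otherwise only log a warning (old configs being upgraded).
     */
    static void assertWarnOrFail(String msg, boolean condition, boolean failOnError) {
        if (condition) {
            return; // config is fine, nothing to do
        }
        if (failOnError) {
            throw new IllegalStateException(msg); // clean/new config: hard error
        }
        LOG.append("WARN: ").append(msg).append('\n'); // legacy config: warn only
    }

    public static void main(String[] args) {
        // Old config (e.g. luceneMatchVersion < 7.0): deprecated setting only warns.
        assertWarnOrFail("reserve is deprecated", false, false);
        System.out.println(LOG.toString().startsWith("WARN:")); // prints: true

        // New config (luceneMatchVersion >= 7.0): the same setting is an error.
        try {
            assertWarnOrFail("reserve is deprecated", false, true);
        } catch (IllegalStateException e) {
            System.out.println("error raised"); // prints: error raised
        }
    }
}
```

The "built-in reminder" in the real code comes from referencing {{Version.LUCENE_7_0_0}} directly: once that constant is removed, the check no longer compiles and must be deleted along with the backcompat logic.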

> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch
>
>
> With SolrCloud mode, when doing replication via IndexFetcher, we occasionally 
> see the Fetch fail and then get restarted from scratch in cases where an 
> Index file is deleted after fetch manifest is computed and before the fetch 
> actually transfers the file. The risk of this happening can be reduced with a 
> higher value of reserveCommitDuration. However, the current configuration 
> only allows this value to be adjusted for "master" mode. This change allows 
> the value to also be changed when using "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html






[jira] [Updated] (SOLR-10965) Implement ExecutePlanAction for autoscaling

2017-06-29 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-10965:
-
Attachment: SOLR-10965.patch

A very simple implementation of ExecutePlanAction that executes operations 
sequentially along with a test.

> Implement ExecutePlanAction for autoscaling
> ---
>
> Key: SOLR-10965
> URL: https://issues.apache.org/jira/browse/SOLR-10965
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10965.patch
>
>
> The ExecutePlanAction will use cluster operations computed by 
> ComputePlanAction and execute them against the cluster.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 935 - Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/935/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [SolrCore, 
SolrIndexSearcher, MMapDirectory, MDCAwareThreadPoolExecutor, MMapDirectory, 
MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1019)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:564)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:326)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2037)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2189)  at 
org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:949)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:564)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:482)  
at org.apache.solr.core.SolrCore.(SolrCore.java:917)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:564)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:859)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:564)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 

[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068771#comment-16068771
 ] 

Hoss Man commented on SOLR-10123:
-

bq. The same thing would occur when a multi-valued numeric field was used in an 
expression, but that is not included in the unit tests.

shouldn't it be???

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068767#comment-16068767
 ] 

Hoss Man commented on SOLR-10123:
-

(FWIW Houston, attaching patches showing your progress/attempts makes it easier 
for people to follow along with exactly what you're doing and offer meaningful 
ideas/suggestions)

bq. However the randomized doc-values cannot be used since docValues are 
required for almost all Analytics Component functionality.

That's fine -- if the feature requires docValues it requires docValues.  The 
main reasons the docValue randomization was added were:
* to help catch bugs/assumptions in code related to docValues
* so tests for things like facets (which work with non-dv tries, but require 
dv's for points) could do this...{code}
@BeforeClass
public static void beforeClass() throws Exception {
  // we need DVs on point fields to compute stats & facets
  if (Boolean.getBoolean(NUMERIC_POINTS_SYSPROP))
    System.setProperty(NUMERIC_DOCVALUES_SYSPROP, "true");
{code}

bq. Almost all tests pass now, however there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. ...

What you're talking about is noted in SOLR-10924.  Personally i consider it a 
feature of Points fields.  

How we deal with it depends largely on what folks think the "right" behavior is 
and how it should be documented.  From an end user standpoint i think it's 
*great* -- they'll have an accurate statistical representation of the data they 
put in, and if they don't want duplicate values considered they shouldn't put 
the dups in. (ie: document it as a limitation of using Trie numerics, not a 
"bug" in Points)

How it affects the tests and what should be done there is a harder question 
because I have no idea how much this impacts the existing tests with your 
current working changes.

One approach is to leave the test data in place, leave the duplicate values in 
place, and account for the discrepancy in the assertions -- ala 
TestExportWriter.testDuplicates()

A different approach would be to change the tests to ensure they don't use 
duplicates in their test data, so the numbers are equivalent regardless of the 
underlying implementation.

A third option is to eliminate the points randomization completely -- i 
wouldn't advise this unless the other options are for some reason completely 
impossible -- and systematically test both Trie fields and Point fields with 
separate tests that know about the differing behavior.

But as things stand right now, this jira claims the new code works with Point 
fields, but this claim is not backed up by any new testing, so _something_ 
needs to change.





> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[jira] [Created] (SOLR-10978) Add Javadocs for SolrJ collection admin request classes

2017-06-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-10978:


 Summary: Add Javadocs for SolrJ collection admin request classes
 Key: SOLR-10978
 URL: https://issues.apache.org/jira/browse/SOLR-10978
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation, SolrJ
Reporter: Shalin Shekhar Mangar
Priority: Minor


The myriad of classes under CollectionAdminRequest have little to no javadocs. 
We shouldn't be forcing people to look up documentation for these requests from 
the ref guide all the time. I think a basic level of javadocs would be nice to 
have. Extra points for linking them to the relevant page/section of the ref 
guide for further reading.






[jira] [Comment Edited] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068518#comment-16068518
 ] 

Houston Putman edited comment on SOLR-10123 at 6/29/17 6:18 PM:


Okay, so I have updated the cloud and non-cloud schemas to add the randomized 
numeric fields. However the randomized doc-values cannot be used since 
docValues are required for almost all Analytics Component functionality.

Almost all tests pass now, however there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. SortedSetDocValues only store the unique set of 
values for a multi-valued field, however SortedNumericDocValues can store the 
same value multiple times for a field on the same document. Therefore analytics 
results can vary between the two. 

Imagine you have the following document
{code}
{
  id="1", 
  multi_valued_int_field=[1,1,2,2,3], 
  float_field=3
}
{code}

and we were executing a facet over multi_valued_int_field, calculating the sum 
of float_field. I.e., for every unique value in multi_valued_int_field, 
calculate the sum of float_field.

If multi_valued_int_field is of type IntPointField, then the following results 
appear

||Facet Value||Calculation||Result||Reason||
|1|3 + 3|6|value 1 appears 2 times in the multivalued field so 2 instances of 3 
are summed|
|2|3 + 3|6|value 2 appears 2 times in the multivalued field so 2 instances of 3 
are summed|
|3|3|3|value 3 appears 1 time in the multivalued field so 3 is the result|

If multi_valued_int_field is of type TrieIntField, then the following results 
appear

||Facet Value||Calculation||Result||Reason||
|1|3|3|value 1 appears 1 time in the multivalued field so 3 is the result|
|2|3|3|value 2 appears 1 time in the multivalued field so 3 is the result|
|3|3|3|value 3 appears 1 time in the multivalued field so 3 is the result|

The difference here is how IntPointField and TrieIntField are stored. 
IntPointField does not deduplicate the values in the array while TrieIntField 
does.

The same thing would occur when a multi-valued numeric field was used in an 
expression, but that is not included in the unit tests.
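The facet results in the tables above can be reproduced with a small stand-alone sketch, using plain Java collections to stand in for the two docValues behaviors (a {{List}} keeps duplicates like SortedNumericDocValues; a {{TreeSet}} collapses them like SortedSetDocValues). This is an illustration of the semantics only, not Solr code:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

// Simulates faceting over multi_valued_int_field=[1,1,2,2,3] on a single
// document with float_field=3, summing float_field per facet value.
public class DocValuesDedupSketch {
    static Map<Integer, Double> facetSum(Collection<Integer> fieldValues, double floatField) {
        Map<Integer, Double> sums = new TreeMap<>();
        for (int v : fieldValues) {
            // one contribution of float_field per stored value occurrence
            sums.merge(v, floatField, Double::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        List<Integer> stored = Arrays.asList(1, 1, 2, 2, 3);

        // SortedNumericDocValues-like (PointField): duplicates kept
        System.out.println(facetSum(stored, 3.0));               // prints: {1=6.0, 2=6.0, 3=3.0}

        // SortedSetDocValues-like (TrieField): duplicates collapsed
        System.out.println(facetSum(new TreeSet<>(stored), 3.0)); // prints: {1=3.0, 2=3.0, 3=3.0}
    }
}
```

The only difference between the two calls is whether duplicate field values survive storage, which is exactly why the same analytics request can return different sums for Point and Trie fields.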


was (Author: houstonputman):
Okay, so I have updated the cloud and non-cloud schemas to add the randomized 
numeric fields. However the randomized doc-values cannot be used since 
docValues are required for almost all Analytics Component functionality.

Almost all tests pass now, however there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. SortedSetDocValues only store the unique set of 
values for a multi-valued field, however SortedNumericDocValues can store the 
same value multiple times for a field on the same document. Therefore analytics 
results can vary between the two. 

For example, if you were faceting on {{multi_valued_int_field}} and calculated 
{{sum(float_field)}} on just the following document:
{{Document = ( id="1", multi_valued_int_field=\[1,1,2,2,3\], float_field=3 )}}

If {{multi_valued_int_field}} was a {{IntPointField}}, then the results of the 
facet would be ( {{facet_value : facet_results, ...}} ):
{{1 : ( sum(float_field) = 6 ) , 2 : ( sum(float_field) = 6 ) , 3 : ( 
sum(float_field) = 3 )}}

If {{multi_valued_int_field}} was a {{TrieIntField}}, then the results of the 
facet would be ( {{facet_value : facet_results, ...}} ):
{{1 : ( sum(float_field) = 3 ) , 2 : ( sum(float_field) = 3 ) , 3 : ( 
sum(float_field) = 3 )}}

This isn't included in the unit tests, but the same thing would occur when a 
multi-valued numeric field was used in an expression. The results could be 
different.

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression 

Re: branch_6x does not build

2017-06-29 Thread Karl Wright
Thanks, looks like the build issue is fixed!
Karl


On Thu, Jun 29, 2017 at 1:18 PM, Erick Erickson 
wrote:

> Yep, forgot to commit locally before I pushed after resolving merge
> conflicts from master. Sorry for the inconvenience. Should be fixed
> now.
>
> On Thu, Jun 29, 2017 at 4:18 AM, Karl Wright  wrote:
> > I pushed your core code fix along with my change.  I have no idea who
> broke
> > the tests; the test class itself hasn't been touched in a while, so I
> > suspect it was broken due to the changes Mr. Erickson committed.
> >
> > Karl
> >
> >
> > On Thu, Jun 29, 2017 at 6:30 AM, Mikhail Khludnev 
> wrote:
> >>
> >> Hi,
> >>
> >> I tried to fix
> >>
> >> $ git diff
> >>
> >> diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java
> >> b/solr/core/src/java/org/apache/solr/core/SolrCore.java
> >>
> >> index c02c748..e60f9dd 100644
> >>
> >> --- a/solr/core/src/java/org/apache/solr/core/SolrCore.java
> >>
> >> +++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java
> >>
> >> @@ -2833,7 +2833,7 @@ public final class SolrCore implements
> >> SolrInfoMBean, SolrMetricProducer, Closea
> >>
> >>  CoreDescriptor cd = getCoreDescriptor();
> >>
> >>  if (cd != null) {
> >>
> >>if (coreContainer != null) {
> >>
> >> -lst.add("aliases", coreContainer.getCoreNames(this));
> >>
> >> +lst.add("aliases", coreContainer.getNamesForCore(this));
> >>
> >>}
> >>
> >>CloudDescriptor cloudDesc = cd.getCloudDescriptor();
> >>
> >>if (cloudDesc != null) {
> >>
> >>
> >> but got the next compile errors
> >>
> >> common.compile-test:
> >>
> >> [mkdir] Created dir:
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/
> solr-core/classes/test
> >>
> >> [javac] Compiling 785 source files to
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/
> solr-core/classes/test
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/BasicDistributedZkTest.java:524:
> >> error: cannot find symbol
> >>
> >> [javac] JettySolrRunner jetty = jettys.get(0);
> >>
> >> [javac] ^
> >>
> >> [javac]   symbol:   class JettySolrRunner
> >>
> >> [javac]   location: class BasicDistributedZkTest
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/BasicDistributedZkTest.java:569:
> >> error: cannot find symbol
> >>
> >> [javac]   assertEquals(0,
> >> CollectionAdminRequest.createCollection(collection, "conf1",
> numShards, 1)
> >>
> >> [javac]   ^
> >>
> >> [javac]   symbol:   variable CollectionAdminRequest
> >>
> >> [javac]   location: class BasicDistributedZkTest
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/BasicDistributedZkTest.java:584:
> >> error: cannot find symbol
> >>
> >> [javac]
> >> assertTrue(CollectionAdminRequest.addReplicaToShard(collection,
> >> "shard"+((freezeI%numShards)+1))
> >>
> >> [javac]  ^
> >>
> >> [javac]   symbol:   variable CollectionAdminRequest
> >>
> >> [javac]   location: class BasicDistributedZkTest
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/UnloadDistributedZkTest.java:118:
> >> error: cannot find symbol
> >>
> >> [javac] SolrClient client = clients.get(0);
> >>
> >> [javac] ^
> >>
> >> [javac]   symbol:   class SolrClient
> >>
> >> [javac]   location: class UnloadDistributedZkTest
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/UnloadDistributedZkTest.java:170:
> >> error: cannot find symbol
> >>
> >> [javac] SolrClient client = clients.get(0);
> >>
> >> [javac] ^
> >>
> >> [javac]   symbol:   class SolrClient
> >>
> >> [javac]   location: class UnloadDistributedZkTest
> >>
> >> [javac]
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/
> test/org/apache/solr/cloud/UnloadDistributedZkTest.java:383:
> >> error: cannot find symbol
> >>
> >> [javac] JettySolrRunner jetty = jettys.get(0);
> >>
> >> [javac] ^
> >>
> >> [javac]   symbol:   class JettySolrRunner
> >>
> >> [javac]   location: class UnloadDistributedZkTest
> >>
> >> [javac] Note: Some input files use or override a deprecated API.
> >>
> >> [javac] Note: Recompile with -Xlint:deprecation for details.
> >>
> >> [javac] Note: Some input files use unchecked or unsafe operations.
> >>
> >> [javac] Note: Recompile with -Xlint:unchecked for details.
> >>
> >> [javac] 6 errors
> >>
> >>
> >> common.compile-test:
> >>
> >> [javac] Compiling 737 source files to
> >> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/
> solr-core/classes/test
> >>
> >> [javac]
> >> 

[jira] [Comment Edited] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068518#comment-16068518
 ] 

Houston Putman edited comment on SOLR-10123 at 6/29/17 5:35 PM:


Okay, so I have updated the cloud and non-cloud schemas to add the randomized 
numeric fields. However, the randomized doc-values cannot be used, since 
docValues are required for almost all Analytics Component functionality.

Almost all tests pass now; however, there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. SortedSetDocValues only stores the unique set of 
values for a multi-valued field, whereas SortedNumericDocValues can store the 
same value multiple times for a field on the same document. Therefore, analytics 
results can vary between the two. 

For example, if you were faceting on {{multi_valued_int_field}} and calculating 
{{sum(float_field)}} on just the following document:
{{Document = ( id="1", multi_valued_int_field=\[1,1,2,2,3\], float_field=3 )}}

If {{multi_valued_int_field}} were an {{IntPointField}}, then the results of the 
facet would be ( {{facet_value : facet_results, ...}} ):
{{1 : ( sum(float_field) = 6 ) , 2 : ( sum(float_field) = 6 ) , 3 : ( 
sum(float_field) = 3 )}}

If {{multi_valued_int_field}} were a {{TrieIntField}}, then the results of the 
facet would be ( {{facet_value : facet_results, ...}} ):
{{1 : ( sum(float_field) = 3 ) , 2 : ( sum(float_field) = 3 ) , 3 : ( 
sum(float_field) = 3 )}}

This isn't included in the unit tests, but the same thing would occur when a 
multi-valued numeric field was used in an expression. The results could be 
different.


was (Author: houstonputman):
Okay, so I have updated the cloud and non-cloud schemas to add the randomized 
numeric fields. However, the randomized doc-values cannot be used, since 
docValues are required for almost all Analytics Component functionality.

Almost all tests pass now; however, there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. SortedSetDocValues only stores the unique set of 
values for a multi-valued field, whereas SortedNumericDocValues can store the 
same value multiple times for a field on the same document. Therefore, analytics 
results can vary between the two. 

For example, if you were faceting on {{multi_valued_int_field}} and calculating 
{{sum(float_field)}} on just the following document:
{{Document = ( id="1", multi_valued_int_field=\[1,1,2,2,3\], float_field=3 )}}

If {{multi_valued_int_field}} were an {{IntPointField}}, then the results of the 
facet would be:
{{1 : ( sum(float_field) = 6 ) , 2 : ( sum(float_field) = 6 ) , 3 : ( 
sum(float_field) = 3 )}}

If {{multi_valued_int_field}} were a {{TrieIntField}}, then the results of the 
facet would be:
{{1 : ( sum(float_field) = 3 ) , 2 : ( sum(float_field) = 3 ) , 3 : ( 
sum(float_field) = 3 )}}

This isn't included in the unit tests, but the same thing would occur when a 
multi-valued numeric field was used in an expression. The results could be 
different.

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-06-29 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068662#comment-16068662
 ] 

Christine Poerschke commented on SOLR-10962:


precommit and solr core tests pass. I think this is good to go, but would 
appreciate further pairs of eyes on this, since the change concerns a 
long-established config element.

Replication was added by SOLR-561 in 2008, [~shalinmangar] - any thoughts?

[~hossman] - you're always good and watchful w.r.t. config deprecation issues - 
do you think the proposed route to add a top-level _commitReserveDuration_ 
element and to deprecate the _master.commitReserveDuration_ sub-element makes 
sense?
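For illustration, a sketch of how the two config shapes might look in the replicationHandler section of solrconfig.xml. This is only my reading of the proposal above; the attached SOLR-10962 patch is authoritative for the exact element name and placement:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <!-- existing (to be deprecated): only honored in master/slave mode -->
  <lst name="master">
    <str name="commitReserveDuration">00:00:30</str>
  </lst>
  <!-- proposed: top-level element, also honored in SolrCloud mode -->
  <str name="commitReserveDuration">00:00:30</str>
</requestHandler>
```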

> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch
>
>
> With SolrCloud mode, when doing replication via IndexFetcher, we occasionally 
> see the Fetch fail and then get restarted from scratch in cases where an 
> Index file is deleted after fetch manifest is computed and before the fetch 
> actually transfers the file. The risk of this happening can be reduced with a 
> higher value of reserveCommitDuration. However, the current configuration 
> only allows this value to be adjusted for "master" mode. This change allows 
> the value to also be changed when using "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html






[JENKINS] Lucene-Solr-Tests-master - Build # 1922 - Unstable

2017-06-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1922/

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3904CCCAA9561302:B150F31007AA7EFA]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11223 lines...]
   [junit4] Suite: org.apache.solr.cloud.ReplaceNodeTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/temp/solr.cloud.ReplaceNodeTest_3904CCCAA9561302-001/init-core-data-001
   [junit4]   2> 293639 WARN  
(SUITE-ReplaceNodeTest-seed#[3904CCCAA9561302]-worker) [] 

[jira] [Commented] (SOLR-1364) Distributed search return Solr shard header information (like qtime)

2017-06-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068653#comment-16068653
 ] 

Erick Erickson commented on SOLR-1364:
--

Also, isn't some of this info returned if you set shards.info=true?

> Distributed search return Solr shard header information (like qtime)
> 
>
> Key: SOLR-1364
> URL: https://issues.apache.org/jira/browse/SOLR-1364
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 1.4
>Reporter: Jason Rutherglen
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-1364.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Distributed queries can expose the Solr shard query information
> such as qtime. The aggregate qtime can be broken up into the
> time required for each stage.






Re: branch_6x does not build

2017-06-29 Thread Erick Erickson
Yep, forgot to commit locally before I pushed after resolving merge
conflicts from master. Sorry for the inconvenience. Should be fixed
now.

On Thu, Jun 29, 2017 at 4:18 AM, Karl Wright  wrote:
> I pushed your core code fix along with my change.  I have no idea who broke
> the tests; the test class itself hasn't been touched in a while, so I
> suspect it was broken due to the changes Mr. Erickson committed.
>
> Karl
>
>
> On Thu, Jun 29, 2017 at 6:30 AM, Mikhail Khludnev  wrote:
>>
>> Hi,
>>
>> I tried to fix
>>
>> $ git diff
>>
>> diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java
>> b/solr/core/src/java/org/apache/solr/core/SolrCore.java
>>
>> index c02c748..e60f9dd 100644
>>
>> --- a/solr/core/src/java/org/apache/solr/core/SolrCore.java
>>
>> +++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java
>>
>> @@ -2833,7 +2833,7 @@ public final class SolrCore implements
>> SolrInfoMBean, SolrMetricProducer, Closea
>>
>>  CoreDescriptor cd = getCoreDescriptor();
>>
>>  if (cd != null) {
>>
>>if (coreContainer != null) {
>>
>> -lst.add("aliases", coreContainer.getCoreNames(this));
>>
>> +lst.add("aliases", coreContainer.getNamesForCore(this));
>>
>>}
>>
>>CloudDescriptor cloudDesc = cd.getCloudDescriptor();
>>
>>if (cloudDesc != null) {
>>
>>
>> but got the next compile errors
>>
>> common.compile-test:
>>
>> [mkdir] Created dir:
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/solr-core/classes/test
>>
>> [javac] Compiling 785 source files to
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/solr-core/classes/test
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:524:
>> error: cannot find symbol
>>
>> [javac] JettySolrRunner jetty = jettys.get(0);
>>
>> [javac] ^
>>
>> [javac]   symbol:   class JettySolrRunner
>>
>> [javac]   location: class BasicDistributedZkTest
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:569:
>> error: cannot find symbol
>>
>> [javac]   assertEquals(0,
>> CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
>>
>> [javac]   ^
>>
>> [javac]   symbol:   variable CollectionAdminRequest
>>
>> [javac]   location: class BasicDistributedZkTest
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:584:
>> error: cannot find symbol
>>
>> [javac]
>> assertTrue(CollectionAdminRequest.addReplicaToShard(collection,
>> "shard"+((freezeI%numShards)+1))
>>
>> [javac]  ^
>>
>> [javac]   symbol:   variable CollectionAdminRequest
>>
>> [javac]   location: class BasicDistributedZkTest
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:118:
>> error: cannot find symbol
>>
>> [javac] SolrClient client = clients.get(0);
>>
>> [javac] ^
>>
>> [javac]   symbol:   class SolrClient
>>
>> [javac]   location: class UnloadDistributedZkTest
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:170:
>> error: cannot find symbol
>>
>> [javac] SolrClient client = clients.get(0);
>>
>> [javac] ^
>>
>> [javac]   symbol:   class SolrClient
>>
>> [javac]   location: class UnloadDistributedZkTest
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:383:
>> error: cannot find symbol
>>
>> [javac] JettySolrRunner jetty = jettys.get(0);
>>
>> [javac] ^
>>
>> [javac]   symbol:   class JettySolrRunner
>>
>> [javac]   location: class UnloadDistributedZkTest
>>
>> [javac] Note: Some input files use or override a deprecated API.
>>
>> [javac] Note: Recompile with -Xlint:deprecation for details.
>>
>> [javac] Note: Some input files use unchecked or unsafe operations.
>>
>> [javac] Note: Recompile with -Xlint:unchecked for details.
>>
>> [javac] 6 errors
>>
>>
>> common.compile-test:
>>
>> [javac] Compiling 737 source files to
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/build/solr-core/classes/test
>>
>> [javac]
>> /home/mikhail_khlud...@epam.com/lucene-solr/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:587:
>> error: cannot find symbol
>>
>> [javac]   .setCoreName(collection + freezeI)
>>
>> [javac]   ^
>>
>> [javac]   symbol:   method setCoreName(String)
>>
>> [javac]   location: class AddReplica
>>
>>
>> On Thu, Jun 29, 2017 at 12:28 PM, Karl Wright  wrote:
>>>
>>> Problem is the following lines:
>>>

[jira] [Commented] (SOLR-10974) Replication - Unable to download tlog

2017-06-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068637#comment-16068637
 ] 

Erick Erickson commented on SOLR-10974:
---

Please raise this question on the user's list at solr-u...@lucene.apache.org, 
see: (http://lucene.apache.org/solr/community.html#mailing-lists-irc) there are 
a _lot_ more people watching that list who may be able to help. 

If it's determined that this is a code issue in Solr and not a 
configuration/usage problem, we can raise a JIRA.


As it stands, there is very little information to go on here. Exactly _how_ did 
you "activate the replication of a shard via the web interface"?

> Replication - Unable to download tlog
> -
>
> Key: SOLR-10974
> URL: https://issues.apache.org/jira/browse/SOLR-10974
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.7
> Environment: Redhat 7.2
>Reporter: Rénald Koch
>
> When I activate the replication of a shard via the web interface, the data 
> will replicate well on the new shard, but once all the data has been copied, 
> the data will be erased and the synchronization will start again indefinitely.
> When I look in the logs, I have this error:
> 2017-06-29 10:51:39.768 ERROR 
> (recoveryExecutor-3-thread-1-processing-n:X.X.X.X:8983_solr 
> x:collection_shard2_replica2 s:shard2 c:collection r:core_node4) 
> [c:collection s:shard2 r:core_node4 x:collection_shard2_replica2] 
> o.a.s.h.ReplicationHandler Index fetch failed 
> :org.apache.solr.common.SolrException: Unable to download 
> tlog.2131263.1571535118797897728 completely. Downloaded 0!=871
>at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.cleanup(IndexFetcher.java:1591)
>at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetch(IndexFetcher.java:1474)
>at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1449)
>at 
> org.apache.solr.handler.IndexFetcher.downloadTlogFiles(IndexFetcher.java:893)
>at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:494)
>at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:301)
>at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:400)
>at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:219)
>at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:471)
>at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:284)
>at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:748)
> I tried to extend the tlog retention time (especially with the 
> commitReserveDuration option), but it does not work.






[jira] [Created] (SOLR-10977) Randomize the usage of Points based numerics in schema15.xml and all impacted tests

2017-06-29 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10977:
---

 Summary: Randomize the usage of Points based numerics in 
schema15.xml and all impacted tests
 Key: SOLR-10977
 URL: https://issues.apache.org/jira/browse/SOLR-10977
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man









[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+175) - Build # 20015 - Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20015/
Java: 32bit/jdk-9-ea+175 -server -XX:+UseConcMarkSweepGC --illegal-access=deny

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Error from server at http://127.0.0.1:34665/solr: delete the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:34665/solr: delete the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([60D4DDED3C42E10E:E0F4B8C32D0109A8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:624)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:470)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:400)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1102)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:843)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:774)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:442)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:63)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:965)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 975 - Still Failing!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/975/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 10886 lines...]
[javac] Compiling 785 source files to 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build/solr-core/classes/test
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:524:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class BasicDistributedZkTest
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:569:
 error: cannot find symbol
[javac]   assertEquals(0, 
CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
[javac]   ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:584:
 error: cannot find symbol
[javac]   
assertTrue(CollectionAdminRequest.addReplicaToShard(collection, 
"shard"+((freezeI%numShards)+1))
[javac]  ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:118:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:170:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:383:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class UnloadDistributedZkTest
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:810: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:754: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:59: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build.xml:267: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/common-build.xml:549: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:795: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:807: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:1967: 
Compile failed; see the compiler error output for details.

Total time: 17 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068558#comment-16068558
 ] 

Erick Erickson edited comment on SOLR-10910 at 6/29/17 4:28 PM:


Commit 9947a811e83cc0f848f9ddaa37a4137f19efff1a in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9947a81 ]

SOLR-10910: Clean up a few details left over from pluggable transient core and 
untangling CoreDescriptor/CoreContainer references, didn't commit after merging 
and before I pushed last night



was (Author: jira-bot):
Commit 9947a811e83cc0f848f9ddaa37a4137f19efff1a in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9947a81 ]

SOLR-10910: Clean up a few details left over from pluggable transient core and 
untangling CoreDescriptor/CoreContainer references, didn't commit before 
merging last night


> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068569#comment-16068569
 ] 

Ishan Chattopadhyaya commented on SOLR-10272:
-

Thanks Steve. I think it was a case of the Solaris kernel returning the list of 
files in a different order.

Shalin, your suggestion is implemented; thanks! Now, *if a unit test doesn't 
specify a configset name while creating a collection, it will use the _default 
configset*, not conf1. Also, *any change to the _default configset would need 
to go into two places*: the user's _default configset, i.e. 
server/solr/configsets/_default, and the _default configset in 
solr/core/test-files/_default.

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10272.patch, SOLR-10272.patch.gz, 
> SOLR-10272.patch.gz, SOLR-10272.patch.gz
>
>
> This Jira's motivation is to make the collection-creation experience better 
> for users.
> To create a collection we need to specify a configName that needs to be 
> present in ZK. When a new user is starting Solr, why should he have to know 
> about configsets before he can create a collection?
> When you create a collection using "bin/solr create", the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is a rough outline of what I think we can do here:
> 1. When you start Solr, the bin script checks to see if the 
> "/configs/_baseConfigSet" znode is present. If not, it uploads 
> "basic_configs". 
> We can discuss whether it's "basic_configs" or some other default config 
> set. 
> We can also discuss the name for "/_baseConfigSet". Moving on, though:
> 2. When a user creates a collection from the API, 
> {{admin/collections?action=CREATE&name=gettingstarted}}, here is what we do: 
> use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> the default config set over to a configset with the name of the specified 
> collection.
> collection.configName can truly be an optional parameter. If it's specified, 
> we don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.






Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_131) - Build # 1011 - Still Failing!

2017-06-29 Thread Erick Erickson
Apologies, I forgot to add/commit the merge reconciliations before
pushing, fixed now.

On Thu, Jun 29, 2017 at 7:48 AM, Policeman Jenkins Server wrote:
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/1011/
> Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC
>
> All tests passed
>
> Build Log:
> [...truncated 10948 lines...]
> [javac] Compiling 785 source files to 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\classes\test
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:524:
>  error: cannot find symbol
> [javac] JettySolrRunner jetty = jettys.get(0);
> [javac] ^
> [javac]   symbol:   class JettySolrRunner
> [javac]   location: class BasicDistributedZkTest
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:569:
>  error: cannot find symbol
> [javac]   assertEquals(0, 
> CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
> [javac]   ^
> [javac]   symbol:   variable CollectionAdminRequest
> [javac]   location: class BasicDistributedZkTest
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:584:
>  error: cannot find symbol
> [javac]   
> assertTrue(CollectionAdminRequest.addReplicaToShard(collection, 
> "shard"+((freezeI%numShards)+1))
> [javac]  ^
> [javac]   symbol:   variable CollectionAdminRequest
> [javac]   location: class BasicDistributedZkTest
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:118:
>  error: cannot find symbol
> [javac] SolrClient client = clients.get(0);
> [javac] ^
> [javac]   symbol:   class SolrClient
> [javac]   location: class UnloadDistributedZkTest
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:170:
>  error: cannot find symbol
> [javac] SolrClient client = clients.get(0);
> [javac] ^
> [javac]   symbol:   class SolrClient
> [javac]   location: class UnloadDistributedZkTest
> [javac] 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:383:
>  error: cannot find symbol
> [javac] JettySolrRunner jetty = jettys.get(0);
> [javac] ^
> [javac]   symbol:   class JettySolrRunner
> [javac]   location: class UnloadDistributedZkTest
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 6 errors
>
> BUILD FAILED
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:810: The 
> following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:754: The 
> following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:59: The 
> following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build.xml:267: The 
> following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\common-build.xml:549: 
> The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:795:
>  The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:807:
>  The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:1967:
>  Compile failed; see the compiler error output for details.
>
> Total time: 19 minutes 11 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>



[jira] [Comment Edited] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-29 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064780#comment-16064780
 ] 

Ishan Chattopadhyaya edited comment on SOLR-10272 at 6/29/17 4:23 PM:
--

-Just a note on errors like: "This is the _default configset, which is designed 
to throw error upon collection creation."-

-Almost all tests either use the "conf1" configset (which is uploaded 
initially) or explicitly upload a configset and use that. However, almost all 
CREATE commands did not specify the configset name. After this change, if 
configset name is not specified, the _default configset would be used. Hence, 
*if you see this error*, it means you inadvertently used a _default configset 
and you should modify your test to explicitly specify your configset name while 
creating a collection (usually this is called "conf" or "conf1"). The _default 
configset available to the test-framework is a bogus one that deliberately 
throws that error so that no one inadvertently uses it and instead explicitly 
specifies the required configset name.-

This is no longer the case after implementing the idea which Shalin proposed in 
the subsequent comment. 


was (Author: ichattopadhyaya):
Just a note on errors like: "This is the _default configset, which is designed 
to throw error upon collection creation."

Almost all tests either use the "conf1" configset (which is uploaded initially) 
or explicitly upload a configset and use that. However, almost all CREATE 
commands did not specify the configset name. After this change, if configset 
name is not specified, the _default configset would be used. Hence, *if you see 
this error*, it means you inadvertently used a _default configset and you 
should modify your test to explicitly specify your configset name while 
creating a collection (usually this is called "conf" or "conf1"). The _default 
configset available to the test-framework is a bogus one that deliberately 
throws that error so that no one inadvertently uses it and instead explicitly 
specifies the required configset name.
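In practice, "explicitly specify your configset name" comes down to always passing collection.configName when creating a collection. As a hedged, self-contained sketch (the host, collection, and configset names below are placeholders, not values from this issue), the Collections API call can be built like this:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative only: builds a Collections API CREATE URL that names its
// configset explicitly, so the implicit _default fallback never triggers.
public class CreateCollectionUrl {

    static String createUrl(String host, String collection,
                            String configName, int numShards) {
        return host + "/solr/admin/collections?action=CREATE"
                + "&name=" + URLEncoder.encode(collection, StandardCharsets.UTF_8)
                + "&numShards=" + numShards
                // Naming the configset avoids falling back to _default:
                + "&collection.configName="
                + URLEncoder.encode(configName, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(
            createUrl("http://localhost:8983", "gettingstarted", "conf1", 2));
    }
}
```

In SolrJ-based tests the same effect comes from passing the configset name to the create-collection request instead of omitting it.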

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10272.patch, SOLR-10272.patch.gz, 
> SOLR-10272.patch.gz, SOLR-10272.patch.gz
>
>
> This Jira's motivation is to make the collection-creation experience better 
> for users.
> To create a collection we need to specify a configName that needs to be 
> present in ZK. When a new user is starting Solr, why should he have to know 
> about configsets before he can create a collection?
> When you create a collection using "bin/solr create", the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is a rough outline of what I think we can do here:
> 1. When you start Solr, the bin script checks to see if the 
> "/configs/_baseConfigSet" znode is present. If not, it uploads 
> "basic_configs". 
> We can discuss whether it's "basic_configs" or some other default config 
> set. 
> We can also discuss the name for "/_baseConfigSet". Moving on, though:
> 2. When a user creates a collection from the API, 
> {{admin/collections?action=CREATE&name=gettingstarted}}, here is what we do: 
> use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> the default config set over to a configset with the name of the specified 
> collection.
> collection.configName can truly be an optional parameter. If it's specified, 
> we don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.






[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068560#comment-16068560
 ] 

ASF subversion and git services commented on SOLR-10272:


Commit 46bfd9cf7e9da99e936fe986af88f4ab47d7fe33 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=46bfd9c ]

SOLR-10272: Fix for test failure, while comparing directory contents of 
_default configsets


> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10272.patch, SOLR-10272.patch.gz, 
> SOLR-10272.patch.gz, SOLR-10272.patch.gz
>
>
> This Jira's motivation is to make the collection-creation experience better 
> for users.
> To create a collection we need to specify a configName that needs to be 
> present in ZK. When a new user is starting Solr, why should he have to know 
> about configsets before he can create a collection?
> When you create a collection using "bin/solr create", the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is a rough outline of what I think we can do here:
> 1. When you start Solr, the bin script checks to see if the 
> "/configs/_baseConfigSet" znode is present. If not, it uploads 
> "basic_configs". 
> We can discuss whether it's "basic_configs" or some other default config 
> set. 
> We can also discuss the name for "/_baseConfigSet". Moving on, though:
> 2. When a user creates a collection from the API, 
> {{admin/collections?action=CREATE&name=gettingstarted}}, here is what we do: 
> use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> the default config set over to a configset with the name of the specified 
> collection.
> collection.configName can truly be an optional parameter. If it's specified, 
> we don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.






[jira] [Commented] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068558#comment-16068558
 ] 

ASF subversion and git services commented on SOLR-10910:


Commit 9947a811e83cc0f848f9ddaa37a4137f19efff1a in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9947a81 ]

SOLR-10910: Clean up a few details left over from pluggable transient core and 
untangling CoreDescriptor/CoreContainer references, didn't commit before 
merging last night


> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






Re: 7x, and 7.0 branches

2017-06-29 Thread Anshum Gupta
Going with your suggestions, it seems like we'd be wiping out all of the 
backward-codecs folder/package, is that correct? Also, do we need to put in 
anything to ensure back-compat between 6x and 7x?

-Anshum



> On Jun 29, 2017, at 7:21 AM, Anshum Gupta wrote:
> 
> Thanks Adrien, I’d want to try and do this myself as long as you can validate 
> the correctness :).
> 
> I’ll be working on this in a few hours and should have an update later today 
> and hopefully we’d wrap it up soon.
> 
> -Anshum
> 
> 
> 
>> On Jun 28, 2017, at 10:39 AM, Adrien Grand wrote:
>> 
>> If you don't want to do it, I can do it tomorrow but if you'd like to give 
>> it a try I'd be happy to help if you need any guidance.
>> 
>> On Wed, Jun 28, 2017 at 19:38, Adrien Grand wrote:
>> Hi Anshum,
>> 
>> This looks like a good start to me. You would also need to remove the 6.x 
>> version constants so that TestBackwardCompatibility does not think they are 
>> worth testing, as well as all codecs, postings formats and doc values 
>> formats that are defined in the lucene/backward-codecs module since they are 
>> only about 6.x codecs.
>> 
>> On Wed, Jun 28, 2017 at 09:57, Anshum Gupta wrote:
>> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure. 
>> 
>> I don’t want to change anything that I’m not confident about so I’m just 
>> going to remove those and commit it to my fork. If someone who’s 
>> confident agrees with what I’m doing, I’ll go ahead and make those changes 
>> to the upstream :).
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Jun 28, 2017, at 12:54 AM, Alan Woodward wrote:
>>> 
>>> We don’t need to support lucene5x codecs in 7, so you should be able to 
>>> just remove those tests (and the relevant packages from 
>>> backwards-codecs too), I think?
>>> 
>>> 
 On 28 Jun 2017, at 08:38, Anshum Gupta wrote:
 
 I tried to move forward to see this work before automatically computing 
 the versions but I have about 30-odd failing tests. I’ve made those changes 
 and pushed to my local GitHub account in case you have the time to look: 
 https://github.com/anshumg/lucene-solr
 
 Here’s the build summary if that helps:
 
[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of 
 31):
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
[junit4]   - 
 org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
[junit4]   - 
 org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
[junit4]   - 
 org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
[junit4] 
[junit4] 
[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
[junit4] JVM J2: 0.56 ..47.28 =46.73s
[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
[junit4] Execution time total: 47 seconds
[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24 
 ignored (24 assumptions)
 
 
 -Anshum
 
 
 
> On Jun 27, 2017, at 4:15 AM, Adrien Grand wrote:
> 
> The test***BackwardCompatibility cases can be removed since they make 
> sure that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have 
> to be able to read Lucene 6 norms.
> 
> TestSegmentInfos needs to be adapted to the new versions, we need to 
> replace 5 with 6 and 8 with 9. Maybe we should compute those numbers 
> automatically based on Version.LATEST.major so that it does not require 
> manual changes when moving to a new major version. That would give 5 -> 
> Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
> 
> I can do those changes on Thursday if you don't feel comfortable doing 
> them.
> 
> 
> 
> On Tue, Jun 27, 2017 at 08:12, Anshum Gupta 
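Adrien's suggestion above, deriving the boundary versions from Version.LATEST.major instead of hard-coding them, can be sketched as follows. This is a hypothetical illustration, not Lucene's actual code; LATEST_MAJOR here stands in for org.apache.lucene.util.Version.LATEST.major:

```java
// Hypothetical sketch of computing version bounds from the current major
// version, so they need no manual edit at each new major release.
public class SupportedMajorRange {

    // Stand-in for org.apache.lucene.util.Version.LATEST.major (7 at 7.0).
    static final int LATEST_MAJOR = 7;

    // Newest major that is too old to be read (N-2 policy):
    // 5 when LATEST_MAJOR is 7, 6 when it is 8.
    static int lastUnsupportedMajor() {
        return LATEST_MAJOR - 2;
    }

    // First future major, usable as an exclusive upper bound in tests:
    // 8 when LATEST_MAJOR is 7, 9 when it is 8.
    static int firstFutureMajor() {
        return LATEST_MAJOR + 1;
    }

    public static void main(String[] args) {
        System.out.println(lastUnsupportedMajor()); // 5
        System.out.println(firstFutureMajor());     // 8
    }
}
```

With this, moving to a new major version would shift both bounds automatically, which is exactly the "replace 5 with 6 and 8 with 9" step the thread wants to eliminate.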

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4115 - Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4115/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([2FD706F167CD83E6:4DBAF8B0A843E3D8]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12699 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MetricsHandlerTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068550#comment-16068550
 ] 

Erick Erickson commented on SOLR-10910:
---

Oh crap, didn't commit before pushing.



> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Commented] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068549#comment-16068549
 ] 

Erick Erickson commented on SOLR-10910:
---

Hmm, builds for me. All tests run.

Let me dig

> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Commented] (SOLR-10397) Port 'autoAddReplicas' feature to the policy rules framework and make it work with non-shared filesystems

2017-06-29 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068534#comment-16068534
 ] 

Shalin Shekhar Mangar commented on SOLR-10397:
--

Thanks Dat. I looked at the autoscaling branch and reviewed the test mostly. A 
few comments:
# The code that creates the set of coreNodeNames on the lost node isn't 
correct, because multiple cores can have the same coreNodeName if they belong to 
different collections. Moreover, there is no guarantee that the node we shut 
down actually had replicas from multiple collections. So we need better logic 
to assert that no replica belonging to a collection that has 
autoAddReplicas=false is moved on a nodeLost event.
# Why remove the implicitly created trigger in 
AutoAddReplicasPlanActionTest.testSimple? I presume it is because you want to 
explicitly create the AutoAddReplicasPlanAction, which is fine, but in that 
case a proper end-to-end integration test is also necessary.

I'd appreciate if [~noble.paul] can review the changes to Policy and relevant 
test coverage.

> Port 'autoAddReplicas' feature to the policy rules framework and make it work 
> with non-shared filesystems
> -
>
> Key: SOLR-10397
> URL: https://issues.apache.org/jira/browse/SOLR-10397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10397.1.patch, SOLR-10397.patch
>
>
> Currently 'autoAddReplicas=true' can be specified in the Collection Create 
> API to automatically add replicas when a replica becomes unavailable. I 
> propose to move this feature to the autoscaling cluster policy rules design.
> This will include the following:
> * Trigger support for ‘nodeLost’ event type
> * Modification of existing implementation of ‘autoAddReplicas’ to 
> automatically create the appropriate ‘nodeLost’ trigger.
> * Any such auto-created trigger must be marked internally such that setting 
> ‘autoAddReplicas=false’ via the Modify Collection API should delete or 
> disable corresponding trigger.
> * Support for non-HDFS filesystems while retaining the optimization afforded 
> by HDFS i.e. the replaced replica can point to the existing data dir of the 
> old replica.
> * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across 
> the entire cluster using cluster properties in favor of using the 
> suspend-trigger/resume-trigger APIs.
> This will retain backward compatibility for the most part and keep a common 
> use-case easy to enable as well as make it available to more people (i.e. 
> people who don't use HDFS).






[jira] [Commented] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-06-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068532#comment-16068532
 ] 

Steve Rowe commented on SOLR-10910:
---

Yeah, branch_6x compilation is broken in multiple places.

> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-06-29 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068518#comment-16068518
 ] 

Houston Putman commented on SOLR-10123:
---

Okay, so I have updated the cloud and non-cloud schemas to add the randomized 
numeric fields. However, the randomized doc-values cannot be used, since 
docValues are required for almost all Analytics Component functionality.

Almost all tests pass now; however, there is a difference between 
SortedSetDocValues (TrieField) and SortedNumericDocValues (PointField) that 
might make this impossible. SortedSetDocValues only stores the unique set of 
values for a multi-valued field, whereas SortedNumericDocValues can store the 
same value multiple times for a field on the same document. Therefore analytics 
results can vary between the two. 

For example, if you facet on {{multi_valued_int_field}} and calculate 
{{sum(float_field)}} over just the following document:
{{Document = ( id="1", multi_valued_int_field=\[1,1,2,2,3\], float_field=3 )}}

If {{multi_valued_int_field}} were an {{IntPointField}}, the results of the 
facet would be:
{{1 : ( sum(float_field) = 6 ) , 2 : ( sum(float_field) = 6 ) , 3 : ( sum(float_field) = 3 )}}

If {{multi_valued_int_field}} were a {{TrieIntField}}, the results of the 
facet would be:
{{1 : ( sum(float_field) = 3 ) , 2 : ( sum(float_field) = 3 ) , 3 : ( sum(float_field) = 3 )}}

This isn't included in the unit tests, but the same thing would occur when a 
multi-valued numeric field was used in an expression. The results could be 
different.
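The deduplication difference described above can be reproduced with a small standalone sketch. Plain Java collections stand in for the two doc-values types here; the class, the `facetSum` helper, and the simplified bucket logic are all illustrative, not the Analytics Component's actual code:

```java
import java.util.*;

public class DedupFacetSketch {
    // Facet a single document's multi-valued field and sum a metric per bucket.
    // dedup=true mimics SortedSetDocValues (unique values only);
    // dedup=false mimics SortedNumericDocValues (duplicates kept).
    static Map<Integer, Double> facetSum(int[] multiValued, double metric, boolean dedup) {
        Collection<Integer> values = dedup ? new TreeSet<>() : new ArrayList<>();
        for (int v : multiValued) values.add(v);
        Map<Integer, Double> buckets = new TreeMap<>();
        for (int v : values) buckets.merge(v, metric, Double::sum);
        return buckets;
    }

    public static void main(String[] args) {
        int[] field = {1, 1, 2, 2, 3};   // multi_valued_int_field
        double metric = 3.0;             // float_field
        // PointField-like behavior: duplicate values contribute twice
        System.out.println(facetSum(field, metric, false)); // {1=6.0, 2=6.0, 3=3.0}
        // TrieField-like behavior: duplicates are collapsed
        System.out.println(facetSum(field, metric, true));  // {1=3.0, 2=3.0, 3=3.0}
    }
}
```

The two printed maps match the two facet results quoted in the comment, which is exactly why the randomized field types can change analytics results.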

> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component, with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+175) - Build # 3853 - Still Failing!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3853/
Java: 32bit/jdk-9-ea+175 -client -XX:+UseParallelGC --illegal-access=deny

All tests passed

Build Log:
[...truncated 10962 lines...]
[javac] Compiling 785 source files to 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/classes/test
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:524:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class BasicDistributedZkTest
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:569:
 error: cannot find symbol
[javac]   assertEquals(0, 
CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
[javac]   ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:584:
 error: cannot find symbol
[javac]   
assertTrue(CollectionAdminRequest.addReplicaToShard(collection, 
"shard"+((freezeI%numShards)+1))
[javac]  ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:118:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:170:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:383:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class UnloadDistributedZkTest
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:810: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:754: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:59: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build.xml:267: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/common-build.xml:549: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:795: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:807: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:1967: 
Compile failed; see the compiler error output for details.

Total time: 27 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-Tests-6.x - Build # 980 - Still Failing

2017-06-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/980/

All tests passed

Build Log:
[...truncated 10923 lines...]
[javac] Compiling 785 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:524:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class BasicDistributedZkTest
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:569:
 error: cannot find symbol
[javac]   assertEquals(0, 
CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
[javac]   ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java:584:
 error: cannot find symbol
[javac]   
assertTrue(CollectionAdminRequest.addReplicaToShard(collection, 
"shard"+((freezeI%numShards)+1))
[javac]  ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:118:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:170:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:383:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class UnloadDistributedZkTest
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:810: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:754: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:59: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build.xml:267: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/common-build.xml:549:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:795:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:807:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:1967:
 Compile failed; see the compiler error output for details.

Total time: 23 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-10814) Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos authentication

2017-06-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068447#comment-16068447
 ] 

Hrishikesh Gadre commented on SOLR-10814:
-

ping...

[~anshumg] [~noble.paul] Any thoughts?

> Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos 
> authentication
> ---
>
> Key: SOLR-10814
> URL: https://issues.apache.org/jira/browse/SOLR-10814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>
> Solr allows configuring roles to control user access to the system. This is 
> accomplished through rule-based permission definitions which are assigned to 
> users.
> The authorization framework in Solr passes the information about the request 
> (to be authorized) using an instance of AuthorizationContext class. Currently 
> the only way to extract authenticated user is via getUserPrincipal() method 
> which returns an instance of java.security.Principal class. The 
> RuleBasedAuthorizationPlugin implementation invokes getName() method on the 
> Principal instance to fetch the list of associated roles.
> https://github.com/apache/lucene-solr/blob/2271e73e763b17f971731f6f69d6ffe46c40b944/solr/core/src/java/org/apache/solr/security/RuleBasedAuthorizationPlugin.java#L156
> In case of the basic authentication mechanism, the principal is the userName, 
> so it works fine. But in case of Kerberos authentication, the user principal 
> also contains the REALM information, e.g. instead of foo, it would return 
> f...@example.com. This means that if the user changes the authentication 
> mechanism, he would also need to change the user-role mapping in the 
> authorization section to use f...@example.com instead of foo. This is not 
> good from a usability perspective.   
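One hedged sketch of the kind of mapping that would restore usability: strip the realm (and any host component) from the principal before the role lookup. The `shortName` helper below is hypothetical and illustrative only; a real deployment would more likely rely on Kerberos auth_to_local rules (e.g. Hadoop's KerberosName class) rather than naive string slicing:

```java
public class PrincipalShortNameSketch {
    // Strip the Kerberos realm and any host component from a principal,
    // e.g. "foo/host1.example.com@EXAMPLE.COM" -> "foo". Illustrative only;
    // it ignores auth_to_local rules, escaping, and multi-component names.
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        String withoutRealm = at >= 0 ? principal.substring(0, at) : principal;
        int slash = withoutRealm.indexOf('/');
        return slash >= 0 ? withoutRealm.substring(0, slash) : withoutRealm;
    }

    public static void main(String[] args) {
        System.out.println(shortName("foo@EXAMPLE.COM"));                  // foo
        System.out.println(shortName("foo/host1.example.com@EXAMPLE.COM")); // foo
        System.out.println(shortName("foo"));                              // foo
    }
}
```

With such a mapping applied before the role lookup, the same user-role entries would work under both basic and Kerberos authentication.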






[jira] [Updated] (SOLR-10826) CloudSolrClient using unsplit collection list when expanding aliases

2017-06-29 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-10826:

Attachment: SOLR-10826.patch

OK, I've expanded the test a bit: it now creates a second collection, an alias 
for it, and a combined alias spanning both. Then it tests that the various 
combinations of {{collection=...}} values work as expected. Again, these tests 
do fail without the code fix.

> CloudSolrClient using unsplit collection list when expanding aliases
> 
>
> Key: SOLR-10826
> URL: https://issues.apache.org/jira/browse/SOLR-10826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4, 6.5.1, 6.6
>Reporter: Tim Owen
>Assignee: Varun Thacker
> Attachments: SOLR-10826.patch, SOLR-10826.patch, SOLR-10826.patch
>
>
> Some recent refactoring seems to have introduced a bug in SolrJ's 
> CloudSolrClient: when it expands a collection list and resolves aliases, it 
> uses the wrong local variable for the alias lookup. This leads to an 
> exception because the value is not an alias.
> E.g. suppose you made a request with {{collection=x,y}} where either or both 
> of {{x}} and {{y}} are not real collection names but valid aliases. This will 
> fail, incorrectly, because the lookup uses {{x,y}} as a potential alias name.
> Patch to fix this attached, which was tested locally and fixed the issue.
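A minimal sketch of the behavior the fix restores: split the comma-separated parameter first, then resolve each name against the alias map individually. This is illustrative only; the map-based `resolve` helper is hypothetical and stands in for CloudSolrClient's actual alias handling:

```java
import java.util.*;

public class AliasExpansionSketch {
    // Resolve a comma-separated collection parameter against an alias map.
    // The bug was looking up the unsplit string "x,y" as a single alias;
    // splitting first lets each name resolve on its own.
    static List<String> resolve(String collectionParam, Map<String, String> aliases) {
        List<String> resolved = new ArrayList<>();
        for (String name : collectionParam.split(",")) {
            // an alias may itself map to a comma-separated list of collections
            String target = aliases.getOrDefault(name, name);
            resolved.addAll(Arrays.asList(target.split(",")));
        }
        return resolved;
    }

    public static void main(String[] args) {
        Map<String, String> aliases = Map.of("x", "col1", "y", "col2,col3");
        System.out.println(resolve("x,y", aliases)); // [col1, col2, col3]
    }
}
```

Looking up the whole string `"x,y"` in the map instead would miss both aliases, which is exactly the failure described in the issue.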






[jira] [Commented] (SOLR-6807) Make handleSelect=false by default and deprecate StandardRequestHandler

2017-06-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068443#comment-16068443
 ] 

Noble Paul commented on SOLR-6807:
--

I shall review this tomorrow

> Make handleSelect=false by default and deprecate StandardRequestHandler
> ---
>
> Key: SOLR-6807
> URL: https://issues.apache.org/jira/browse/SOLR-6807
> Project: Solr
>  Issue Type: Task
>Affects Versions: 4.10.2
>Reporter: Alexandre Rafalovitch
>Assignee: David Smiley
>Priority: Minor
>  Labels: solrconfig.xml
> Fix For: master (7.0)
>
> Attachments: 
> SOLR_6807__fix__stateVer__check_to_not_depend_on_handleSelect_setting.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_handleSelect_false.patch, 
> SOLR_6807_handleSelect_false.patch, SOLR_6807_test_files.patch
>
>
> In the solrconfig.xml, we have a long explanation on the legacy 
> ** section. Since we are cleaning up 
> legacy stuff for version 5, is it safe now to flip handleSelect's default to 
> be *false* and therefore remove both the attribute and the whole section 
> explaining it?
> Then, a section in Reference Guide or even a blog post can explain what to do 
> for the old clients that still need it. But it does not seem to be needed 
> anymore for the new users. And possibly cause confusing now that we have 
> implicit, explicit and overlay handlers.






[jira] [Commented] (SOLR-10945) Get expression fails to operate on sort expr

2017-06-29 Thread Susheel Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068433#comment-16068433
 ] 

Susheel Kumar commented on SOLR-10945:
--

Hi Joel - I looked at the code to debug this issue, and here is what I found.

The two expressions above (...merge(get(a)... and ...merge(sort(get(a)...) are 
equivalent as a whole, mathematically/syntactically/functionally, but the one 
where merge is passed get(a) directly results in an error: a GetStream is 
passed into merge's init method, and GetStream.getStreamSort() returns null 
(below), while in the other case a SortStream is passed and its 
getStreamSort() returns a proper comparator.

Wondering how we can handle this, either by passing a StreamComparator to 
GetStream (and how), or by doing something in merge to avoid the upfront 
check. Please share your thoughts.

  /** Return the stream sort - ie, the order in which records are returned */
  public StreamComparator getStreamSort(){
    return null;
  }

MergeStream
--
  private void init(StreamComparator comp, TupleStream ... streams) throws IOException {
    // All streams must both be sorted so that comp can be derived from
    for (TupleStream stream : streams) {
      if (!comp.isDerivedFrom(stream.getStreamSort())) {
        throw new IOException("Invalid MergeStream - all substream comparators (sort) must be a superset of this stream's comparator.");
      }
    }
  }
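A toy model of the failure and of the "forward the wrapped stream's sort" option: the interface and records below are hypothetical stand-ins, not the real org.apache.solr.client.solrj.io classes, and plain strings stand in for StreamComparator:

```java
public class StreamSortSketch {
    // Minimal stand-in for TupleStream's sort-reporting contract.
    interface TupleStream { String getStreamSort(); }

    // Mimics SortStream: knows and reports its own sort order.
    record Sorted(String sort) implements TupleStream {
        public String getStreamSort() { return sort; }
    }

    // Mimics GetStream today: reports no sort, so merge's check fails.
    record GetToday(TupleStream wrapped) implements TupleStream {
        public String getStreamSort() { return null; }
    }

    // One possible fix: forward the wrapped stream's sort.
    record GetFixed(TupleStream wrapped) implements TupleStream {
        public String getStreamSort() { return wrapped.getStreamSort(); }
    }

    // Mimics MergeStream.init's precondition on all substreams.
    static boolean mergeAccepts(String comp, TupleStream... streams) {
        for (TupleStream s : streams) {
            if (!comp.equals(s.getStreamSort())) return false; // null fails here
        }
        return true;
    }

    public static void main(String[] args) {
        Sorted a = new Sorted("id asc");
        System.out.println(mergeAccepts("id asc", new GetToday(a))); // false
        System.out.println(mergeAccepts("id asc", new GetFixed(a))); // true
    }
}
```

In this model the sort information already exists on the stored stream; the null comes only from the wrapper discarding it, which suggests forwarding it is at least plausible.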

> Get expression fails to operate on sort expr
> 
>
> Key: SOLR-10945
> URL: https://issues.apache.org/jira/browse/SOLR-10945
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.6
>Reporter: Susheel Kumar
>Priority: Minor
>
> Get expr fails to operate on a variable which has sort stream and returns 
> "Invalid MergeStream - all substream comparators (sort) must be a superset of 
> this stream's comparator." Exception tuple.
> Below get is given variable a and b which are having sort expr and fails to 
> work
> ==
> let(
> a=sort(select(tuple(id=3,email="C"),id,email),by="id asc,email asc"),
> b=sort(select(tuple(id=2,email="B"),id,email),by="id asc,email asc"),
> c=merge(get(a),get(b),on="id asc,email asc"),
> get(c)
> )
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid MergeStream - all substream comparators (sort) 
> must be a superset of this stream's comparator.",
> "EOF": true
>   }
> ]
>   }
> }
> while below sort outside get works
> ==
> let(
> a=select(tuple(id=3,email="C"),id,email),
> b=select(tuple(id=2,email="B"),id,email),
> c=merge(sort(get(a),by="id asc,email asc"),sort(get(b),by="id asc,email asc"),
> on="id asc,email asc"),
> get(c)
> )
> {
>   "result-set": {
> "docs": [
>   {
> "email": "B",
> "id": "2"
>   },
>   {
> "email": "C",
> "id": "3"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 0
>   }
> ]
>   }
> }






[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_131) - Build # 1011 - Still Failing!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/1011/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 10948 lines...]
[javac] Compiling 785 source files to 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\classes\test
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:524:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class BasicDistributedZkTest
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:569:
 error: cannot find symbol
[javac]   assertEquals(0, 
CollectionAdminRequest.createCollection(collection, "conf1", numShards, 1)
[javac]   ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\BasicDistributedZkTest.java:584:
 error: cannot find symbol
[javac]   
assertTrue(CollectionAdminRequest.addReplicaToShard(collection, 
"shard"+((freezeI%numShards)+1))
[javac]  ^
[javac]   symbol:   variable CollectionAdminRequest
[javac]   location: class BasicDistributedZkTest
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:118:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:170:
 error: cannot find symbol
[javac] SolrClient client = clients.get(0);
[javac] ^
[javac]   symbol:   class SolrClient
[javac]   location: class UnloadDistributedZkTest
[javac] 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\UnloadDistributedZkTest.java:383:
 error: cannot find symbol
[javac] JettySolrRunner jetty = jettys.get(0);
[javac] ^
[javac]   symbol:   class JettySolrRunner
[javac]   location: class UnloadDistributedZkTest
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:810: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:754: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:59: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build.xml:267: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\common-build.xml:549: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:795: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:807: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:1967:
 Compile failed; see the compiler error output for details.

Total time: 19 minutes 11 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_131) - Build # 6689 - Still Unstable!

2017-06-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6689/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.LargeFieldTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001\init-core-data-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.search.LargeFieldTest_D08018D297FDE389-001

at __randomizedtesting.SeedInfo.seed([D08018D297FDE389]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.V2ApiIntegrationTest.testCollectionsApi

Error Message:
Error from server at http://127.0.0.1:57805/solr: 
java.nio.file.InvalidPathException: Illegal char <�> at index 53: 
C:UsersjenkinsworkspaceLucene-Solr-master-Windowssolr�uildsolr-core estJ0 
empsolr.handler.V2ApiIntegrationTest_D08018D297FDE389-001 empDir-002

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:57805/solr: 
java.nio.file.InvalidPathException: Illegal char <�> at index 53: 
C:UsersjenkinsworkspaceLucene-Solr-master-Windowssolr�uildsolr-core  estJ0  
 empsolr.handler.V2ApiIntegrationTest_D08018D297FDE389-001   empDir-002
at 
__randomizedtesting.SeedInfo.seed([D08018D297FDE389:C1EFF532FE48737]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:804)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:600)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:470)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:400)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1102)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:843)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:774)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.handler.V2ApiIntegrationTest.testCollectionsApi(V2ApiIntegrationTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 

[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068406#comment-16068406
 ] 

David Smiley commented on SOLR-10272:
-

Wow that's an insightful catch [~steve_rowe]!

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10272.patch, SOLR-10272.patch.gz, 
> SOLR-10272.patch.gz, SOLR-10272.patch.gz
>
>
> This Jira's motivation is to improve the collection-creation experience 
> for users.
> To create a collection we need to specify a configName that needs to be 
> present in ZK. When a new user is starting Solr, why should they worry about 
> having to know about configsets before they can create a collection?
> When you create a collection using "bin/solr create" the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is the rough outline of what I think we can do here:
> 1. When you start Solr, the bin script checks to see if the 
> "/configs/_baseConfigSet" znode is present. If not, it uploads the 
> "basic_configs". 
> We can discuss whether it's "basic_configs" or some other default config 
> set. 
> Also we can discuss the name for "/_baseConfigSet". Moving on though:
> 2. When a user creates a collection from the API 
> {{admin/collections?action=CREATE&name=gettingstarted}}, here is what we do:
> Use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> over the default config set to a configset with the name of the collection 
> specified.
> collection.configName can truly be an optional parameter. If it's specified, 
> we don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.
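A rough sketch of the two calls implied by the outline above. The configset name "_baseConfigSet" and the idea of copying it per collection come from this proposal and are not a final API; the helper class below is purely illustrative.

```java
public class CreateCollectionSketch {
    // Step 2 of the outline: first copy the default configset via the
    // ConfigSets API, then create the collection against the copied configset.
    static String copyDefaultConfigUrl(String solrBase, String collection) {
        return solrBase + "/admin/configs?action=CREATE&name=" + collection
                + "&baseConfigSet=_baseConfigSet";
    }

    static String createCollectionUrl(String solrBase, String collection) {
        return solrBase + "/admin/collections?action=CREATE&name=" + collection
                + "&collection.configName=" + collection;
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr";
        System.out.println(copyDefaultConfigUrl(base, "gettingstarted"));
        System.out.println(createCollectionUrl(base, "gettingstarted"));
    }
}
```

If collection.configName is omitted entirely, the proposal is that Solr performs the copy step implicitly, so a new user never has to know configsets exist.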






Re: 7x, and 7.0 branches

2017-06-29 Thread Anshum Gupta
Thanks Adrien, I’d want to try and do this myself as long as you can validate 
the correctness :).

I’ll be working on this in a few hours and should have an update later today, 
and hopefully we’ll wrap it up soon.

-Anshum



> On Jun 28, 2017, at 10:39 AM, Adrien Grand  wrote:
> 
> If you don't want to do it, I can do it tomorrow but if you'd like to give it 
> a try I'd be happy to help if you need any guidance.
> 
> On Wed, Jun 28, 2017 at 19:38, Adrien Grand wrote:
> Hi Anshum,
> 
> This looks like a good start to me. You would also need to remove the 6.x 
> version constants so that TestBackwardCompatibility does not think they are 
> worth testing, as well as all codecs, postings formats and doc values formats 
> that are defined in the lucene/backward-codecs module since they are only 
> about 6.x codecs.
> 
> On Wed, Jun 28, 2017 at 09:57, Anshum Gupta wrote:
> Thanks for confirming that Alan, I had similar thoughts but wasn’t sure. 
> 
> I don’t want to change anything that I’m not confident about so I’m just 
> going to remove those and commit it to my fork. If someone who’s 
> confident agrees with what I’m doing, I’ll go ahead and make those changes to 
> the upstream :).
> 
> -Anshum
> 
> 
> 
>> On Jun 28, 2017, at 12:54 AM, Alan Woodward > > wrote:
>> 
>> We don’t need to support lucene5x codecs in 7, so you should be able to just 
>> remove those tests (and the relevant packages from backwards-codecs 
>> too), I think?
>> 
>> 
>>> On 28 Jun 2017, at 08:38, Anshum Gupta >> > wrote:
>>> 
>>> I tried to move forward to see this work before automatically computing the 
>>> versions but I have about 30-odd failing tests. I’ve made those changes and 
>>> pushed them to my GitHub fork in case you have the time to look: 
>>> https://github.com/anshumg/lucene-solr 
>>>  
>>> 
>>> Here’s the build summary if that helps:
>>> 
>>>[junit4] Tests with failures [seed: 31C3B60E557C7E14] (first 10 out of 
>>> 31):
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testOutliers2
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testShortRange
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewValues
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFullLongRange
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testRamBytesUsed
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testFewLargeValues
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testByteRange
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene53.TestLucene53NormsFormat.testLongRange
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene50.TestLucene50SegmentInfoFormat.testRandomExceptions
>>>[junit4]   - 
>>> org.apache.lucene.codecs.lucene62.TestLucene62SegmentInfoFormat.testRandomExceptions
>>>[junit4] 
>>>[junit4] 
>>>[junit4] JVM J0: 0.56 .. 9.47 = 8.91s
>>>[junit4] JVM J1: 0.56 .. 4.13 = 3.57s
>>>[junit4] JVM J2: 0.56 ..47.28 =46.73s
>>>[junit4] JVM J3: 0.56 .. 3.89 = 3.33s
>>>[junit4] Execution time total: 47 seconds
>>>[junit4] Tests summary: 8 suites, 215 tests, 30 errors, 1 failure, 24 
>>> ignored (24 assumptions)
>>> 
>>> 
>>> -Anshum
>>> 
>>> 
>>> 
 On Jun 27, 2017, at 4:15 AM, Adrien Grand > wrote:
 
 The test***BackwardCompatibility cases can be removed since they make sure 
 that Lucene 7 can read Lucene 6 norms, while Lucene 8 doesn't have to be 
 able to read Lucene 6 norms.
 
 TestSegmentInfos needs to be adapted to the new versions, we need to 
 replace 5 with 6 and 8 with 9. Maybe we should compute those numbers 
 automatically based on Version.LATEST.major so that it does not require 
 manual changes when moving to a new major version. That would give 5 -> 
 Version.LATEST.major-2 and 8 -> Version.LATEST.major+1.
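A minimal sketch of that computation. The constant below stands in for Version.LATEST.major (7 once the 7.0 branch is cut); the class and method names are illustrative, not actual Lucene API.

```java
public class VersionBounds {
    // Illustrative stand-in for org.apache.lucene.util.Version.LATEST.major.
    static final int LATEST_MAJOR = 7;

    // Oldest index major that is no longer readable: N-1 indexes are
    // supported, so N-2 is the first version TestSegmentInfos must reject.
    static int tooOldMajor() { return LATEST_MAJOR - 2; }

    // First future major that SegmentInfos must reject as too new.
    static int tooNewMajor() { return LATEST_MAJOR + 1; }

    public static void main(String[] args) {
        System.out.println(tooOldMajor()); // 5 when LATEST_MAJOR == 7
        System.out.println(tooNewMajor()); // 8 when LATEST_MAJOR == 7
    }
}
```

Deriving both bounds from LATEST_MAJOR means the test needs no manual edits when the next major version bump happens.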
 
 I can do those changes on Thursday if you don't feel comfortable doing 
 them.
 
 
 
 On Tue, Jun 27, 2017 at 08:12, Anshum Gupta wrote:
 Without making any changes at all and just bumping up the version, I hit 
 these errors when running the tests:
 
[junit4]   2> NOTE: reproduce with: ant test  
 -Dtestcase=TestSegmentInfos -Dtests.method=testIllegalCreatedVersion 
 -Dtests.seed=C818A61FA6C293A1 -Dtests.slow=true -Dtests.locale=es-PR 
 -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true 
 

[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-06-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068402#comment-16068402
 ] 

Steve Rowe commented on SOLR-10272:
---

Policeman Jenkins failure 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1408/] - doesn't 
reproduce for me on Linux, but I suspect the Solaris platform is relevant here, 
and that the failing directory comparison is depending on a sort that's not 
stable across platforms, since the directory contents are the same, just in a 
different order:

{noformat}
Checking out Revision c9c0121d9399ff0009c51d6a32632dd0962e8c8f 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestConfigSetsAPI 
-Dtests.method=testUserAndTestDefaultConfigsetsAreSame 
-Dtests.seed=DBE6E9A12E3D770 -Dtests.slow=true -Dtests.locale=zh 
-Dtests.timezone=Africa/Sao_Tome -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 2.02s J1 | 
TestConfigSetsAPI.testUserAndTestDefaultConfigsetsAreSame <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: Mismatch in files 
expected:<[[lang, elevate.xml, currency.xml, managed-schema, params.json, 
protwords.txt, stopwords.txt, synonyms.txt, solrconfig.xml]]> but 
was:<[[params.json, solrconfig.xml, lang, currency.xml, stopwords.txt, 
elevate.xml, protwords.txt, managed-schema, synonyms.txt]]>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([DBE6E9A12E3D770:72C37467A4C75D3]:0)
   [junit4]>at 
org.apache.solr.cloud.TestConfigSetsAPI$1.preVisitDirectory(TestConfigSetsAPI.java:747)
   [junit4]>at 
org.apache.solr.cloud.TestConfigSetsAPI$1.preVisitDirectory(TestConfigSetsAPI.java:741)
   [junit4]>at java.nio.file.Files.walkFileTree(Files.java:2677)
   [junit4]>at java.nio.file.Files.walkFileTree(Files.java:2742)
   [junit4]>at 
org.apache.solr.cloud.TestConfigSetsAPI.compareDirectories(TestConfigSetsAPI.java:741)
   [junit4]>at 
org.apache.solr.cloud.TestConfigSetsAPI.testUserAndTestDefaultConfigsetsAreSame(TestConfigSetsAPI.java:732)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.cloud.TestConfigSetsAPI_DBE6E9A12E3D770-001
   [junit4]   2> Jun 29, 2017 11:31:15 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Lucene70, 
sim=RandomSimilarity(queryNorm=false): {}, locale=zh, timezone=Africa/Sao_Tome
   [junit4]   2> NOTE: SunOS 5.11 amd64/Oracle Corporation 1.8.0_131 
(64-bit)/cpus=3,threads=1,free=176703120,total=518979584
   [junit4]   2> NOTE: All tests run in this JVM: [TestQueryUtils, 
MoveReplicaTest, EchoParamsTest, SparseHLLTest, TestPayloadScoreQParserPlugin, 
TestLegacyFieldCache, SimpleMLTQParserTest, TestLockTree, V2StandaloneTest, 
TestPushWriter, TestDynamicFieldCollectionResource, BasicAuthStandaloneTest, 
TestPerFieldSimilarity, TestFieldCollectionResource, 
OverseerModifyCollectionTest, TestPostingsSolrHighlighter, 
TestMultiValuedNumericRangeQuery, TestUseDocValuesAsStored, TestStressLucene, 
TestInPlaceUpdatesStandalone, SolrGraphiteReporterTest, 
DistributedFacetPivotSmallTest, TestChildDocTransformer, TestFastWriter, 
TestSolrJ, TestDistributedGrouping, TestDynamicLoading, 
DistribDocExpirationUpdateProcessorTest, FieldMutatingUpdateProcessorTest, 
SolrPluginUtilsTest, TestFiltering, TestSizeLimitedDistributedMap, 
SolrCmdDistributorTest, TestSolrConfigHandlerCloud, 
DocumentAnalysisRequestHandlerTest, HdfsTlogReplayBufferedWhileIndexingTest, 
TestCryptoKeys, DirectSolrSpellCheckerTest, TestPolicyCloud, 
LukeRequestHandlerTest, TestReplicaProperties, BasicZkTest, 
SolrCoreCheckLockOnStartupTest, ParsingFieldUpdateProcessorsTest, 
DistributedQueryComponentCustomSortTest, SpellCheckCollatorTest, 
DistributedFacetPivotLongTailTest, SolrIndexConfigTest, 
TlogReplayBufferedWhileIndexingTest, TestRandomCollapseQParserPlugin, 
TestRequestStatusCollectionAPI, CurrencyFieldTypeTest, 
TestExclusionRuleCollectionAccess, HdfsSyncSliceTest, 
HdfsChaosMonkeySafeLeaderTest, ReplicationFactorTest, TestAnalyzedSuggestions, 
TermsComponentTest, TestWordDelimiterFilterFactory, 
ClassificationUpdateProcessorTest, TestManagedSynonymGraphFilterFactory, 
FileBasedSpellCheckerTest, TestSchemaNameResource, 
TestSlowCompositeReaderWrapper, SolrCloudExampleTest, TestInitQParser, 
CachingDirectoryFactoryTest, TestTolerantUpdateProcessorCloud, 
TestCloudRecovery, SolrShardReporterTest, TestRecovery, BlockCacheTest, 
TestInfoStreamLogging, TestConfigSetsAPIExclusivity, TestSolrFieldCacheBean, 
FullSolrCloudDistribCmdsTest, TestValueSourceCache, 
OpenExchangeRatesOrgProviderTest, 

[jira] [Commented] (SOLR-10976) StreamExpressionTest.testParallelTerminatingDaemonUpdateStream() failures: Boolean cannot be cast to Map

2017-06-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068388#comment-16068388
 ] 

Steve Rowe commented on SOLR-10976:
---

Another Policeman failure 
[https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2711/] (from January 22, 
2017, so only the email notification is still accessible):

{noformat}
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelTerminatingDaemonUpdateStream

Error Message:
--> http://127.0.0.1:34900/solr/collection1: An exception has occurred on the 
server, refer to server log for details.

Stack Trace:
java.io.IOException: --> http://127.0.0.1:34900/solr/collection1: An exception 
has occurred on the server, refer to server log for details.
at 
__randomizedtesting.SeedInfo.seed([3E0A16E04BD9A583:CF6ECE881FB36A9D]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:238)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelTerminatingDaemonUpdateStream(StreamExpressionTest.java:3765)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
