[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 899 - Still Failing

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/899/

No tests ran.

Build Log:
[...truncated 28007 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (16.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.8 MB in 0.13 sec (234.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 70.9 MB in 0.29 sec (243.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 81.3 MB in 0.30 sec (272.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6192 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6192 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (19.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 51.8 MB in 0.81 sec (63.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 146.0 MB in 2.36 sec (61.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 147.0 MB in 2.34 sec (62.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]  

[jira] [Updated] (LUCENE-8071) GeoExactCircle should create circles with the right number of planes

2017-11-30 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8071:
-
    Attachment: LUCENE-8071.patch
                LUCENE-8071-test.patch

I attached the test and the proposed solution.

I basically force the shape to create at least 4 sectors when the radius is big. 
I have also realized that the shape always has a minimum of two sectors, so a 
lot of code that checks for fewer than two sectors can be removed.
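A minimal sketch of that idea, under stated assumptions: the threshold, class, and method below are hypothetical illustrations, not the code in the attached LUCENE-8071.patch.

{code}
public class SectorCountSketch {
  // Illustrative only: start from at least 4 sectors when the radius is large,
  // so no single circle plane has to cover too much of the sphere.
  static int initialSectorCount(double radius) {
    // Hypothetical "big radius" threshold; the patch may use a different bound.
    return (radius > Math.PI * 0.5) ? 4 : 2;
  }
}
{code}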



> GeoExactCircle should create circles with the right number of planes
> -
>
> Key: LUCENE-8071
> URL: https://issues.apache.org/jira/browse/LUCENE-8071
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
> Attachments: LUCENE-8071-test.patch, LUCENE-8071.patch
>
>
> Hi [~kwri...@metacarta.com],
> There is still a situation where the test can fail. It happens when the planet 
> model is a SPHERE and the radius is slightly lower than PI. The circle is 
> created with two sectors, but the circle plane is too big and the shape is 
> bogus.
> I will attach a test and a proposed solution. (I hope this is the last issue 
> of this saga.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8071) GeoExactCircle should create circles with the right number of planes

2017-11-30 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8071:


 Summary: GeoExactCircle should create circles with the right number 
of planes
 Key: LUCENE-8071
 URL: https://issues.apache.org/jira/browse/LUCENE-8071
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Reporter: Ignacio Vera


Hi [~kwri...@metacarta.com],

There is still a situation where the test can fail. It happens when the planet 
model is a SPHERE and the radius is slightly lower than PI. The circle is 
created with two sectors, but the circle plane is too big and the shape is bogus.

I will attach a test and a proposed solution. (I hope this is the last issue of 
this saga.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 265 - Failure

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/265/

4 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([6B92EF153FC882F1:E8E4B0E7E9B18C50]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1122)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:647)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:110)
at 
org.apache.solr.common.cloud.ClusterStateUtil.waitForAllActiveAndLiveReplicas(ClusterStateUtil.java:70)
at 
org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(Thread

[jira] [Commented] (SOLR-11542) Add URP to route time partitioned collections

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273925#comment-16273925
 ] 

ASF subversion and git services commented on SOLR-11542:


Commit 7deca62501ec7484ea54d292fe0131c78384e95f in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7deca62 ]

SOLR-11542: Rename TimePartitionedUpdateProcessor to 
TimeRoutedAliasUpdateProcessor

(cherry picked from commit 7877f5a)


> Add URP to route time partitioned collections
> -
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch, 
> SOLR_11542_time_series_URP.patch, SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I propose a new URP.  _(edit: originally it was thought 
> DistributedURP would be modified but thankfully we can avoid that)._
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.
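A minimal sketch of the routing idea in scope here, under stated assumptions: the class and method names below are hypothetical (the class committed on this issue is TimeRoutedAliasUpdateProcessor, but this is not its code), and the daily suffix pattern is illustrative, since deciding the real pattern is itself part of this issue.

{code}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimeRoutingSketch {
  // Map a document's timestamp to a target collection via a time-based suffix.
  static String targetCollection(String aliasName, Instant docTimestamp) {
    // Hypothetical daily pattern; the actual pattern is decided in this issue.
    DateTimeFormatter fmt =
        DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);
    return aliasName + "_" + fmt.format(docTimestamp);
  }
}
{code}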



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11542) Add URP to route time partitioned collections

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273922#comment-16273922
 ] 

ASF subversion and git services commented on SOLR-11542:


Commit 7877f5a511a60e44f2dabd45ac1d6f84626b1161 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7877f5a ]

SOLR-11542: Rename TimePartitionedUpdateProcessor to 
TimeRoutedAliasUpdateProcessor


> Add URP to route time partitioned collections
> -
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch, 
> SOLR_11542_time_series_URP.patch, SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I propose a new URP.  _(edit: originally it was thought 
> DistributedURP would be modified but thankfully we can avoid that)._
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored in $solr.data.home/$core.name

2017-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273917#comment-16273917
 ] 

David Smiley commented on SOLR-11508:
-

bq. SOLR_CORE_HOME

+1

RE defaults: we should try to be consistent with other settings, like SOLR_HOME. 
That env var is examined in bin/solr and becomes a system property; we can do the 
same for SOLR_CORE_HOME, turning it into something like solr.core.home. I think 
if someone hard-codes a value in solr.xml then that is what's used; if they 
don't, then read the system property. This approach is also consistent with how 
someone sets their data dir: a hard-coded value, failing that the system 
property, failing that the default.
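A minimal sketch of that precedence, under stated assumptions: the solr.core.home property is only proposed in this thread, and the resolver class below is hypothetical (bin/solr would export the SOLR_CORE_HOME env var as this system property).

{code}
public class CoreHomeResolverSketch {
  // Resolve coreRootDirectory: an explicit solr.xml value wins, then the system
  // property (set by bin/solr from the SOLR_CORE_HOME env var), then the default.
  static String resolveCoreRootDirectory(String solrXmlValue, String defaultDir) {
    if (solrXmlValue != null && !solrXmlValue.isEmpty()) {
      return solrXmlValue;                  // hard-coded in solr.xml
    }
    String sysProp = System.getProperty("solr.core.home");
    if (sysProp != null) {
      return sysProp;                       // -Dsolr.core.home / SOLR_CORE_HOME
    }
    return defaultDir;                      // fall back to the default
  }
}
{code}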

> core.properties should be stored in $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker, where data must be stored in a directory that is independent of 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour, but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored in $solr.data.home/$core.name

2017-11-30 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273899#comment-16273899
 ] 

Marc Morissette commented on SOLR-11508:


[~dsmiley] I was thinking the same thing. 

What should the environment variable be called? 
* SOLR_CORE_HOME fits well with SOLR_HOME and SOLR_DATA_HOME
* SOLR_CORE_ROOT_DIRECTORY is most similar to coreRootDirectory.

I think I like SOLR_CORE_HOME a little bit better.

What should the behaviour be if coreRootDirectory is already defined in 
solr.xml? Should the environment variable override solr.xml or vice-versa? I 
guess environment variables/command line parameters usually override 
configuration files?

> core.properties should be stored in $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker, where data must be stored in a directory that is independent of 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour, but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored in $solr.data.home/$core.name

2017-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273864#comment-16273864
 ] 

David Smiley commented on SOLR-11508:
-

[~erickerickson] and [~elyograg] I see where you are coming from. Perhaps Marc 
and I have misjudged the solution to this annoyance of working with Solr/Docker. 
What if we made coreRootDirectory easier to set, particularly for Docker users 
-- e.g. a SOLR_CORE_ROOT_DIRECTORY env var or something more concise? That would 
be a very simple and, I bet, non-controversial issue to take up. What do you 
think? The key thing a Solr/Docker user (like me) wants is a directory where the 
cores live (core.properties) and the data for each core, and which need not 
contain solr.xml. That's coreRootDirectory? The confs are either in ZK with 
SolrCloud, or with classic Solr the configSet mechanism allows them to live some 
place other than coreRootDir (I think).

> core.properties should be stored in $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker, where data must be stored in a directory that is independent of 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour, but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty, making the stream uncloseable

2017-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273860#comment-16273860
 ] 

David Smiley commented on SOLR-11692:
-

[~millerjeff0] can you attach a patch to the issue instead? Inlining diffs is 
problematic due to escaping. Also, unlike some diffs that need no context, this 
one has to be applied to see that your intent was to move the closing until 
after the code that calls {{chain.doFilter}}. I looked at SolrDispatchFilter 
and note that {{chain.doFilter}} actually appears in two places (not one); the 
second is beyond where you moved it to. So I think your patch only addresses 
the issue for some cases but not others. Anyway, the fix should be easy.

> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty, making the stream uncloseable
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter; however, there 
> are code paths where we pass the objects through to the DefaultHandler, which 
> can then no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at 
> org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThre

[JENKINS] Solr-reference-guide-master - Build # 3624 - Failure

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/3624/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H19 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 9c0ca9b46505b21ac7e3165d8f9f3c0ce3fe63ec 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9c0ca9b46505b21ac7e3165d8f9f3c0ce3fe63ec
Commit message: "Merge branch 'master' of 
https://git-wip-us.apache.org/repos/asf/lucene-solr";
 > git rev-list 01d12777c4bcab7ae8085d5ed5e1b20a0e1a5526 # timeout=10
java.util.concurrent.TimeoutException: Timeout waiting for task.
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:259)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:91)
at 
com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:69)
at 
com.atlassian.jira.rest.client.internal.async.DelegatingPromise.get(DelegatingPromise.java:107)
at 
hudson.plugins.jira.JiraRestService.getIssuesFromJqlSearch(JiraRestService.java:177)
at 
hudson.plugins.jira.JiraSession.getIssuesFromJqlSearch(JiraSession.java:135)
at 
io.jenkins.blueocean.service.embedded.jira.JiraSCMListener.onChangeLogParsed(JiraSCMListener.java:43)
at 
hudson.model.listeners.SCMListener.onChangeLogParsed(SCMListener.java:120)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:582)
Caused: java.io.IOException: Failed to parse changelog
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:584)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:491)
at hudson.model.Run.execute(Run.java:1737)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:419)
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 9c0ca9b46505b21ac7e3165d8f9f3c0ce3fe63ec 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9c0ca9b46505b21ac7e3165d8f9f3c0ce3fe63ec
Commit message: "Merge branch 'master' of 
https://git-wip-us.apache.org/repos/asf/lucene-solr";
java.util.concurrent.TimeoutException: Timeout waiting for task.
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:259)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:91)
at 
com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:69)
at 
com.atlassian.jira.rest.client.internal.async.DelegatingPromise.get(DelegatingPromise.java:107)
at 
hudson.plugins.jira.JiraRestService.getIssuesFromJqlSearch(JiraRestService.java:177)
at 
hudson.plugins.jira.JiraSession.getIssuesFromJqlSearch(JiraSession.java:135)
at 
io.jenkins.blueocean.service.embedded.jira.JiraSCMListener.onChangeLogParsed(JiraSCMListener.java:43)
at 
hudson.model.listeners.SCMListener.onChangeLogParsed(SCMListener.java:120)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:582)
Caused: java.io.IOException: Failed to parse changelog
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:584)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStr

[jira] [Created] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down

2017-11-30 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-11712:


 Summary: Streaming throws IndexOutOfBoundsException against an 
alias when a shard is down
 Key: SOLR-11712
 URL: https://issues.apache.org/jira/browse/SOLR-11712
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


I have an alias against multiple collections. If any one of the shards of the 
underlying collection is down, then the stream handler throws an 
IndexOutOfBoundsException:

{code}
{"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: 
Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}}
{code}

From the Solr logs:
{code}
2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 
r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
at 
org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
at 
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
at 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
at 
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thre
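A minimal sketch of the kind of guard that would make this failure clearer, under stated assumptions: the class and method below are illustrative, not the actual CloudSolrStream internals; the real fix presumably belongs in constructStreams, where an empty replica list ends up being indexed when a shard is down.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.Set;

import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

public class ReplicaPickerSketch {
  // Pick one active, live replica for a slice, failing with a clear message
  // instead of the raw IndexOutOfBoundsException when a shard is down.
  static Replica pickActiveReplica(Slice slice, Set<String> liveNodes) throws IOException {
    List<Replica> candidates = new ArrayList<>();
    for (Replica replica : slice.getReplicas()) {
      if (replica.getState() == Replica.State.ACTIVE
          && liveNodes.contains(replica.getNodeName())) {
        candidates.add(replica);
      }
    }
    if (candidates.isEmpty()) {
      // Without a guard like this, candidates.get(0) throws
      // "IndexOutOfBoundsException: Index: 0, Size: 0" -- the symptom above.
      throw new IOException("No active replicas for slice: " + slice.getName());
    }
    Collections.shuffle(candidates, new Random());
    return candidates.get(0);
  }
}
{code}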

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1430 - Still unstable

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1430/

20 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([1A5FD357C74335A5:920BEC8D69BF585D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.ra

[jira] [Resolved] (LUCENE-8070) GeoExactCircle should not create circles that do not fit in the spheroid

2017-11-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-8070.
-
       Resolution: Fixed
    Fix Version/s: 7.2
                   master (8.0)
                   6.7

> GeoExactCircle should not create circles that do not fit in the spheroid
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.2
>
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch, 
> LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think the sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.
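A minimal sketch of the kind of guard the fix suggests, under stated assumptions: the bound, class, and method below are hypothetical; the committed check in GeoExactCircle may differ.

{code}
public class CircleRadiusCheckSketch {
  // Illustrative only: refuse to construct a circle whose radius is outside
  // what the planet model can represent with valid, non-overlapping sectors.
  static void checkRadius(double radius) {
    // Hypothetical upper bound just short of PI; the real cutoff may differ.
    if (radius < 0.0 || radius > Math.PI - 1e-12) {
      throw new IllegalArgumentException("radius out of range: " + radius);
    }
  }
}
{code}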



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8070) GeoExactCircle should not create circles that do not fit in the spheroid

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273820#comment-16273820
 ] 

ASF subversion and git services commented on LUCENE-8070:
-

Commit 608e094c1ec7a31b9f850bad1e3d27640506ca4a in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=608e094 ]

LUCENE-8070: Put in a check that prevents a bogus exact circle from being 
created.


> GeoExactCircle should not create circles that do not fit in the spheroid
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch, 
> LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think the sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8070) GeoExactCircle should not create circles that do not fit in the spheroid

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273818#comment-16273818
 ] 

ASF subversion and git services commented on LUCENE-8070:
-

Commit 8a385a07e4bfb5c6600f6cf45052785351ff790d in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8a385a0 ]

LUCENE-8070: Put in a check that prevents a bogus exact circle from being 
created.


> GeoExactCircle should not create circles that do not fit in the spheroid
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch, 
> LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think the sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8070) GeoExactCircle should not create circles that do not fit in the spheroid

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273816#comment-16273816
 ] 

ASF subversion and git services commented on LUCENE-8070:
-

Commit 249dac1a5dc7b0737ac9b43c8ab86c20e632e36c in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=249dac1 ]

LUCENE-8070: Put in a check that prevents a bogus exact circle from being 
created.


> GeoExactCircle should not create circles that do not fit in the spheroid
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch, 
> LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think the sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8011) Improve similarity explanations

2017-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273779#comment-16273779
 ] 

ASF GitHub Bot commented on LUCENE-8011:


GitHub user mayya-sharipova opened a pull request:

https://github.com/apache/lucene-solr/pull/280

LUCENE-8011: Improve similarity explanations



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mayya-sharipova/lucene-solr 
LUCENE-8011-improve-similarity-explanations

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/280.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #280


commit c389c4992b66b5ae750ba7aa5b37937ebedc6615
Author: Mayya Sharipova 
Date:   2017-12-01T01:03:39Z

LUCENE-8011: Improve similarity explanations




> Improve similarity explanations
> ---
>
> Key: LUCENE-8011
> URL: https://issues.apache.org/jira/browse/LUCENE-8011
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
>  Labels: newdev
>
> LUCENE-7997 improves the BM25 and Classic explanations so they explain better:
> {noformat}
> product of:
>   2.2 = scaling factor, k1 + 1
>   9.388654 = idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
> 1.0 = n, number of documents containing term
> 17927.0 = N, total number of documents with field
>   0.9987758 = tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) 
> from:
> 979.0 = freq, occurrences of term within document
> 1.2 = k1, term saturation parameter
> 0.75 = b, length normalization parameter
> 1.0 = dl, length of field
> 1.0 = avgdl, average length of field
> {noformat}
> Previously it was pretty cryptic and used confusing terminology like 
> docCount/docFreq without explanation: 
> {noformat}
> product of:
>   0.016547536 = idf, computed as log(1 + (docCount - docFreq + 0.5) / 
> (docFreq + 0.5)) from:
> 449.0 = docFreq
> 456.0 = docCount
>   2.1920826 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b 
> * fieldLength / avgFieldLength)) from:
> 113659.0 = freq=113658
> 1.2 = parameter k1
> 0.75 = parameter b
> 2300.5593 = avgFieldLength
> 1048600.0 = fieldLength
> {noformat}
> We should fix the other similarities in the same way; their explanations 
> should be just as practical.
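For reference, the improved explanation above corresponds to the standard BM25 per-term score; a LaTeX rendering using the same quantities (f = freq, N and n as in the idf line, dl and avgdl as the field lengths) would be:

{noformat}
\mathrm{score} = (k_1 + 1)\cdot
  \log\!\left(1 + \frac{N - n + 0.5}{n + 0.5}\right)\cdot
  \frac{f}{f + k_1\left(1 - b + b\,\frac{dl}{avgdl}\right)}
{noformat}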



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #280: LUCENE-8011: Improve similarity explanations

2017-11-30 Thread mayya-sharipova
GitHub user mayya-sharipova opened a pull request:

https://github.com/apache/lucene-solr/pull/280

LUCENE-8011: Improve similarity explanations



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mayya-sharipova/lucene-solr 
LUCENE-8011-improve-similarity-explanations

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/280.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #280


commit c389c4992b66b5ae750ba7aa5b37937ebedc6615
Author: Mayya Sharipova 
Date:   2017-12-01T01:03:39Z

LUCENE-8011: Improve similarity explanations




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2200 - Failure

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2200/

3 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testLastPublishedStateIsActive

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /clusterstate.json
at 
__randomizedtesting.SeedInfo.seed([5B86DDA0222808BC:B3E4E33185250AE0]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1212)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:332)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:332)
at 
org.apache.solr.common.cloud.ZkStateReader.refreshLegacyClusterState(ZkStateReader.java:541)
at 
org.apache.solr.common.cloud.ZkStateReader.forceUpdateCollection(ZkStateReader.java:351)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:673)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:668)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:333)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIg

[jira] [Commented] (LUCENE-8043) Attempting to add documents past limit can corrupt index

2017-11-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273711#comment-16273711
 ] 

Michael McCandless commented on LUCENE-8043:


Wow, what an evil test :)  +1 to the patch; thanks @simonw and 
[~ysee...@gmail.com]!

> Attempting to add documents past limit can corrupt index
> 
>
> Key: LUCENE-8043
> URL: https://issues.apache.org/jira/browse/LUCENE-8043
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10, 7.0, master (8.0)
>Reporter: Yonik Seeley
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.2, 7.1.1
>
> Attachments: LUCENE-8043.patch, LUCENE-8043.patch, 
> YCS_IndexTest7a.java
>
>
> The IndexWriter check for too many documents does not always work, resulting 
> in going over the limit.  Once this happens, Lucene refuses to open the index 
> and throws a CorruptIndexException: Too many documents.
> This appears to affect all versions of Lucene/Solr (the check was first 
> implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in 
> 4.10) 
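
For context, a minimal sketch of the user-visible symptom (the path is a placeholder; the actual reproduction is the attached YCS_IndexTest7a.java):

{code}
import java.nio.file.Paths;

import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Hypothetical sketch: once concurrent addDocument calls race past the
// IndexWriter limit (IndexWriter.MAX_DOCS = 2147483519), merely opening
// the index fails.
public class OpenOverLimitIndex {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"))) {
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        System.out.println("maxDoc=" + reader.maxDoc());
      } catch (CorruptIndexException e) {
        // e.g. "CorruptIndexException: Too many documents"
        System.err.println("index is over the doc limit: " + e.getMessage());
      }
    }
  }
}
{code}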



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream uncloseable

2017-11-30 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273647#comment-16273647
 ] 

Jeff Miller commented on SOLR-11692:


[~markrmil...@gmail.com] Can you comment on this patch? The idea is that we 
wrap the close shield around the request/response only in the context of 
SolrDispatchFilter, and if we have to pass it up the chain or forward it, we 
pass the original:

diff --git a/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java b/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
index fa7eb56..dd27820 100644
--- a/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
+++ b/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
@@ -352,8 +352,7 @@ public class SolrDispatchFilter extends BaseSolrFilter {
         request = wrappedRequest.get();
       }
 
-      request = closeShield(request, retry);
-      response = closeShield(response, retry);
+
 
       if (cores.getAuthenticationPlugin() != null) {
         log.debug("User principal: {}", ((HttpServletRequest) request).getUserPrincipal());
@@ -376,7 +375,9 @@ public class SolrDispatchFilter extends BaseSolrFilter {
         }
       }
 
-      HttpSolrCall call = getHttpSolrCall((HttpServletRequest) request, (HttpServletResponse) response, retry);
+      ServletRequest shieldedRequest = closeShield(request, retry);
+      ServletResponse shieldedResponse = closeShield(response, retry);
+      HttpSolrCall call = getHttpSolrCall((HttpServletRequest) shieldedRequest, (HttpServletResponse) shieldedResponse, retry);
       ExecutorUtil.setServerThreadFlag(Boolean.TRUE);
       try {
         Action result = call.call();
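
For reference, a minimal sketch of the close-shield idea this patch relies on (illustrative names, not Solr's actual implementation): the handler sees a wrapper whose stream ignores close(), while Jetty keeps the unwrapped response and can still commit and close the real stream after the filter returns.

{code}
import java.io.IOException;

import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Hypothetical close-shield wrapper: delegates everything except close().
class CloseShieldResponse extends HttpServletResponseWrapper {
  CloseShieldResponse(HttpServletResponse response) { super(response); }

  @Override
  public ServletOutputStream getOutputStream() throws IOException {
    final ServletOutputStream delegate = super.getOutputStream();
    return new ServletOutputStream() {
      @Override public void write(int b) throws IOException { delegate.write(b); }
      @Override public void flush() throws IOException { delegate.flush(); }
      @Override public boolean isReady() { return delegate.isReady(); }
      @Override public void setWriteListener(WriteListener l) { delegate.setWriteListener(l); }
      @Override public void close() { /* shielded: only the container closes the real stream */ }
    };
  }
}
{code}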

> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream uncloseable
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger the closeShield code in SolrDispatchFilter; however, 
> there are code paths where we pass the objects through to the DefaultHandler, 
> which can then no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at org.eclipse.jetty.server.hand

[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273627#comment-16273627
 ] 

Shawn Heisey commented on SOLR-11508:
-

I think the code for solr.solr.home, solr.data.home, and coreRootDirectory is 
working according to design intent, and that the default config files like 
solr.xml and the include script also reflect that design intent.  It is the 
documentation (including the reference guide and the script's help text) that 
is lacking.  We should update the documentation rather than change Solr's 
default behavior or the stock solr.xml.


> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273577#comment-16273577
 ] 

Marc Morissette edited comment on SOLR-11508 at 11/30/17 10:47 PM:
---

As to Erick's question, I believe:

* solr.solr.home contains the server-wide config i.e. solr.xml and the 
configsets.
* coreRootDirectory is where core discovery happens. It contains the 
core.properties files and conf directories. Defaults to solr.solr.home.
* solr.data.home is where core data is stored. It's a directory structure that 
is completely parallel to the one that contains the core.properties (see Core 
Discovery documentation). Defaults to coreRootDirectory.

The issue here is that the doc says:

{quote}  -t   Sets the solr.data.home system property, where Solr will 
store data (index).
  If not set, Solr uses solr.solr.home for config and 
data.{quote}
 
The doc suggests that the core config will be stored in the directory indicated 
by -t. That is currently not the case, but I think it should be.

coreRootDirectory has been there for a long time because it makes sense for 
people to want to store their cores away from their server configuration (1). 
solr.data.home addresses what I think might be a less popular requirement: to 
store core config away from core data (2).

The problem is that since 7.0, the command line options and defaults now make 
it quite easy to think you're addressing need (1) when you're in reality 
configuring for need (2).
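
To make needs (1) and (2) concrete, a sketch of the two layouts (paths are illustrative):

{code}
# Need (1): whole cores (core.properties + conf + data) away from the install.
# Achieved today by setting solr.solr.home or coreRootDirectory:
/var/solr/cores/mycore/core.properties
/var/solr/cores/mycore/conf/...
/var/solr/cores/mycore/data/index/...

# Need (2): only core data away from core config.
# This is what setting solr.data.home currently does:
${solr.solr.home}/mycore/core.properties
${solr.solr.home}/mycore/conf/...
${solr.data.home}/mycore/index/...
{code}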


was (Author: marc.morissette):
As to Erick's question, I believe:

* solr.solr.home contains the server config i.e. solr.xml and the configsets
* coreRootDirectory is where core discovery happens. It contains the 
core.properties files and conf directories. Defaults to solr.solr.home.
* solr.data.home is where the core data is stored. It's a directory structure 
that is parallel to the one that contains the core.properties. Defaults to 
coreRootDirectory.

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273602#comment-16273602
 ] 

Shawn Heisey commented on SOLR-11508:
-

I can see where you're coming from, but I believe that we shouldn't make this 
change.  Consider this an official -1 vote.  I don't often use that.

The change is very much a radical shift in how Solr arranges data on the disk, 
and is NOT what the solr.data.home feature was designed to do.  I suspect that 
the number of users who upgrade without ever reading release notes is high 
enough that there would be a lot of reports of "I just upgraded Solr and now my 
cores aren't loading".

For new users, I am sticking with my assertion that core.properties is not part 
of the core data.  Note that if you change coreRootDirectory to the data home 
in solr.xml, that effectively relocates the entirety of all cores to the data 
home ... which is unnecessary, because just setting the solr home is going to 
do the same thing -- coreRootDirectory defaults to the solr home.

I would rather update the documentation to state that solr.data.home only 
affects dataDir, and that setting the solr home is often the more appropriate 
choice, because it affects the entire core.

It doesn't look like SOLR_DATA_HOME is mentioned in the stock include script, 
so unless we add that, we won't need any documentation comments there.

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11711) Improve memory usage of pivot facets

2017-11-30 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11711:
--
Description: 
Currently, while sending pivot facet requests to each shard, the 
{{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with a 
specified limit > 0. However, with a mincount of 0, the pivot facet will use 
exponentially more wasted memory for every pivot field added, because there 
will be a total of {{limit^(# of pivots)}} pivot values created in memory, even 
though the vast majority of them will have counts of 0 and are therefore 
useless.

Imagine the scenario of a pivot facet with 3 levels and {{facet.limit=1000}}: 
a billion pivot values will be created, and there will almost certainly be 
nowhere near a billion pivot values with counts > 0.

This is likely due to the reasoning mentioned in [this comment in the original 
distributed pivot facet 
ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898]. 
Basically, it was thought that the refinement code would need to know that a 
count was 0 for a shard so that a refinement request wasn't sent to that shard. 
However, this is checked in the code, [in this part of the refinement candidate 
checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275]. 
Therefore, if the {{pivot.mincount}} were set to 1, the non-existent values 
would either:
* Not be known, because the {{facet.limit}} was smaller than the number of 
facet values with positive counts. This isn't an issue, because they wouldn't 
have been returned with {{pivot.mincount}} set to 0.
* Be known, because the {{facet.limit}} was larger than the number of facet 
values returned; therefore this conditional would return false (since we are 
only talking about pivot facets sorted by count).

The solution is to use the same pivot mincount that would be used if no limit 
was specified.

This also relates to a similar problem in field faceting that was "fixed" in 
[SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The solution 
there was to add a flag, {{facet.distrib.mco}}, which would enable not choosing 
a mincount of 0 when unnecessary. Since this flag can only increase performance 
and doesn't break any queries, I have removed it as an option and changed the 
code to always use this behavior.

  was:
Currently while sending pivot facet requests to each shard, the 
{{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with a 
specified limit > 0. However with a mincount of 0, the pivot facet will use 
exponentially more wasted memory for every pivot field added. This is because 
there will be a total of {{limit^(# of pivots)}} pivot values created in 
memory, even though the vast majority of them will have counts of 0, and are 
therefore useless.

Imagine the scenario of a pivot facet with 3 levels, and `facet.limit=1000`. 
There will be a billion pivot values created, and there will almost definitely 
be nowhere near a billion pivot values with counts > 0.

This likely due to the reasoning mentioned in [this comment in the original 
distributed pivot facet 
ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
 Basically it was thought that the refinement code would need to know that a 
count was 0 for a shard so that a refinement request wasn't sent to that shard. 
However this is checked in the code, [in this part of the refinement candidate 
checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
 Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
would either:
* Not be known, because the {{facet.limit}} was smaller than the number of 
facet values with positive counts. This isn't an issue, because they wouldn't 
have been returned with {{pivot.mincount}} set to 0.
* Would be known, because the {{facet.limit}} would be larger than the number 
of facet values returned. therefore this conditional would return false (since 
we are only talking about pivot facets sorted by count).

The solution, is to use the same pivot mincount as would be used if no limit 
was specified. 

This also relates to a similar problem in field faceting that was "fixed" in 
[SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The solution 
was to add a flag, {{facet.distrib.mco}}, which would enable not choosing a 
mincount of 0 when unnessesary. Since this flag can only increase performance, 
and doesn't break any queries I have removed it as an option and replaced the 
code to use the feature always.


> Improve memory usage of pivot facets
> 
>
> Key: SOLR-11711
> URL: https://issues

[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273577#comment-16273577
 ] 

Marc Morissette commented on SOLR-11508:


As to Erick's question, I believe:

* solr.solr.home contains the server config i.e. solr.xml and the configsets
* coreRootDirectory is where core discovery happens. It contains the 
core.properties files and conf directories. Defaults to solr.solr.home.
* solr.data.home is where the core data is stored. It's a directory structure 
that is parallel to the one that contains the core.properties. Defaults to 
coreRootDirectory.

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273556#comment-16273556
 ] 

Marc Morissette commented on SOLR-11508:


I think there might be a way to minimize problems with existing Solr 
installations.

Instead of changing coreRootDirectory's default behaviour, the vanilla solr.xml 
could be modified to default coreRootDirectory to $\{solr.data.home:}.
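
A sketch of what that solr.xml change might look like (the element name is an 
assumption, since the markup was stripped above):

{code}
<!-- Hypothetical solr.xml excerpt: default coreRootDirectory to solr.data.home
     when it is set, falling back to the solr home otherwise. -->
<solr>
  <str name="coreRootDirectory">${solr.data.home:}</str>
</solr>
{code}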

Users with existing installations that have used the service installation 
scripts would typically remain on the old solr.xml. I'd venture that the subset 
of users who define SOLR_DATA_HOME and use the default SOLR_HOME and default 
solr.xml is probably quite small.

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8043) Attempting to add documents past limit can corrupt index

2017-11-30 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8043:

Fix Version/s: 7.1.1
   7.2
   master (8.0)

> Attempting to add documents past limit can corrupt index
> 
>
> Key: LUCENE-8043
> URL: https://issues.apache.org/jira/browse/LUCENE-8043
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10, 7.0, master (8.0)
>Reporter: Yonik Seeley
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.2, 7.1.1
>
> Attachments: LUCENE-8043.patch, LUCENE-8043.patch, 
> YCS_IndexTest7a.java
>
>
> The IndexWriter check for too many documents does not always work, resulting 
> in going over the limit.  Once this happens, Lucene refuses to open the index 
> and throws a CorruptIndexException: Too many documents.
> This appears to affect all versions of Lucene/Solr (the check was first 
> implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in 
> 4.10) 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8043) Attempting to add documents past limit can corrupt index

2017-11-30 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8043:

Attachment: LUCENE-8043.patch

Folks, I have a test that reliably reproduces the issue every time, and very 
quickly. It also trips an assertion in the IW that I had to change, since I 
think it's not guaranteed, especially with the setup I am running in the test. 
[~mikemccand], can you take a look?

> Attempting to add documents past limit can corrupt index
> 
>
> Key: LUCENE-8043
> URL: https://issues.apache.org/jira/browse/LUCENE-8043
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10, 7.0, master (8.0)
>Reporter: Yonik Seeley
>Assignee: Simon Willnauer
> Attachments: LUCENE-8043.patch, LUCENE-8043.patch, 
> YCS_IndexTest7a.java
>
>
> The IndexWriter check for too many documents does not always work, resulting 
> in going over the limit.  Once this happens, Lucene refuses to open the index 
> and throws a CorruptIndexException: Too many documents.
> This appears to affect all versions of Lucene/Solr (the check was first 
> implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in 
> 4.10) 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273530#comment-16273530
 ] 

Marc Morissette commented on SOLR-11508:


I've created this bug because a lot of documentation (including the 
command-line help) indicates that SOLR_DATA_HOME is how you store your data 
outside the installation. It's true but quite misleading because a lot of what 
is needed to load that data remains in coreRootDirectory.

Core.properties and the conf directory are not just config but metadata. If you 
delete a core's directory, you would expect the metadata to follow. If you 
download a new version of Solr and point it to your solr.data.home, you would 
expect Solr to be able to load your cores without breaking a sweat. Cores are 
databases, and their individual configuration should live with them, not with 
the server (except for configsets).

Now, I understand why this makes less sense to Solr veterans who have known 
Solr for a long time, but please understand how unintuitive this feels to 
SolrCloud users and those with less experience. 

My patch does not add or remove any feature. You can still configure different 
values for SOLR_DATA_HOME and coreRootDirectory. I've simply changed the 
defaults to something I consider more intuitive (God knows Solr could use a 
little more of that). 

Yes, changing the default could break some installations (those that have 
defined SOLR_DATA_HOME but not coreRootDirectory) but that is why I've added 
the release note. I feel this is acceptable as long as it makes Solr easier to 
use. Believe me, I'm not the first one to be tripped up by this issue.


> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11711) Improve memory usage of pivot facets

2017-11-30 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11711:
--
Issue Type: Bug  (was: Improvement)

> Improve memory usage of pivot facets
> 
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently, while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However, with a mincount of 0, the pivot facet will 
> use exponentially more wasted memory for every pivot field added, because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0 and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels and {{facet.limit=1000}}: 
> a billion pivot values will be created, and there will almost certainly be 
> nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the 
> original distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically, it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However, this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore, if the {{pivot.mincount}} were set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Be known, because the {{facet.limit}} was larger than the number of facet 
> values returned; therefore this conditional would return false (since we are 
> only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount that would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution there was to add a flag, {{facet.distrib.mco}}, which would enable 
> not choosing a mincount of 0 when unnecessary. Since this flag can only 
> increase performance and doesn't break any queries, I have removed it as an 
> option and changed the code to always use this behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11711) Improve memory usage of pivot facets

2017-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273492#comment-16273492
 ] 

ASF GitHub Bot commented on SOLR-11711:
---

GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/279

SOLR-11711: Improved memory usage for distributed field and pivot facets.

Removed the FACET_DISTRIB_MCO option, since the behavior is now built in.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr pivot_facet_memory_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/279.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #279


commit 8b7ef286100730e26a9bdc8875fce31a5b47b59a
Author: Houston Putman 
Date:   2017-11-30T21:10:50Z

Removed FACET_DISTRIB_MCO option, improved memory usage for distributed 
field and pivot facets.




> Improve memory usage of pivot facets
> 
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently, while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However, with a mincount of 0, the pivot facet will 
> use exponentially more wasted memory for every pivot field added, because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0 and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels and {{facet.limit=1000}}: 
> a billion pivot values will be created, and there will almost certainly be 
> nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the 
> original distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically, it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However, this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore, if the {{pivot.mincount}} were set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Be known, because the {{facet.limit}} was larger than the number of facet 
> values returned; therefore this conditional would return false (since we are 
> only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount that would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution there was to add a flag, {{facet.distrib.mco}}, which would enable 
> not choosing a mincount of 0 when unnecessary. Since this flag can only 
> increase performance and doesn't break any queries, I have removed it as an 
> option and changed the code to always use this behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11711) Improve memory usage of pivot facets

2017-11-30 Thread Houston Putman (JIRA)
Houston Putman created SOLR-11711:
-

 Summary: Improve memory usage of pivot facets
 Key: SOLR-11711
 URL: https://issues.apache.org/jira/browse/SOLR-11711
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: faceting
Affects Versions: master (8.0)
Reporter: Houston Putman
 Fix For: 5.6, 6.7, 7.2


Currently, while sending pivot facet requests to each shard, the 
{{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with a 
specified limit > 0. However, with a mincount of 0, the pivot facet will use 
exponentially more wasted memory for every pivot field added, because there 
will be a total of {{limit^(# of pivots)}} pivot values created in memory, even 
though the vast majority of them will have counts of 0 and are therefore 
useless.

Imagine the scenario of a pivot facet with 3 levels and {{facet.limit=1000}}: 
a billion pivot values will be created, and there will almost certainly be 
nowhere near a billion pivot values with counts > 0.

This is likely due to the reasoning mentioned in [this comment in the original 
distributed pivot facet 
ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898]. 
Basically, it was thought that the refinement code would need to know that a 
count was 0 for a shard so that a refinement request wasn't sent to that shard. 
However, this is checked in the code, [in this part of the refinement candidate 
checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275]. 
Therefore, if the {{pivot.mincount}} were set to 1, the non-existent values 
would either:
* Not be known, because the {{facet.limit}} was smaller than the number of 
facet values with positive counts. This isn't an issue, because they wouldn't 
have been returned with {{pivot.mincount}} set to 0.
* Be known, because the {{facet.limit}} was larger than the number of facet 
values returned; therefore this conditional would return false (since we are 
only talking about pivot facets sorted by count).

The solution is to use the same pivot mincount that would be used if no limit 
was specified.

This also relates to a similar problem in field faceting that was "fixed" in 
[SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The solution 
there was to add a flag, {{facet.distrib.mco}}, which would enable not choosing 
a mincount of 0 when unnecessary. Since this flag can only increase performance 
and doesn't break any queries, I have removed it as an option and changed the 
code to always use this behavior.
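
To make the blow-up concrete, a SolrJ sketch of a request that takes this code 
path (collection and field names are hypothetical):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Hypothetical 3-level pivot with facet.limit=1000: with the per-shard
// mincount forced to 0, up to 1000^3 = 1,000,000,000 pivot values can be
// allocated in memory on the coordinating node.
public class PivotFacetSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(0);
      q.setFacet(true);
      q.addFacetPivotField("country,city,store"); // 3 pivot levels
      q.setFacetLimit(1000);
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getFacetPivot());
    }
  }
}
{code}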



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #279: SOLR-11711: Improved memory usage for distrib...

2017-11-30 Thread HoustonPutman
GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/279

SOLR-11711: Improved memory usage for distributed field and pivot facets.

Removed the FACET_DISTRIB_MCO option, since the behavior is now built in.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr pivot_facet_memory_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/279.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #279


commit 8b7ef286100730e26a9bdc8875fce31a5b47b59a
Author: Houston Putman 
Date:   2017-11-30T21:10:50Z

Removed FACET_DISTRIB_MCO option, improved memory usage for distributed 
field and pivot facets.




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273380#comment-16273380
 ] 

Shawn Heisey edited comment on SOLR-11508 at 11/30/17 9:25 PM:
---

bq. Wouldn't this possibly break existing Solr installations?

I hadn't thought of that, but I think you may be right.  If somebody has 
defined solr.data.home in a 7.1 install and then upgrades Solr to a new version 
with this patch applied, I am pretty sure that the new version of Solr will not 
load any of the existing cores on its first startup, because there will not be 
any core.properties files for it to find.

This sounds like a really bad idea.

[~marc.morissette], if you're specifying a different directory for data than 
you are for config, why do you want to have the core.properties file in the 
data directory?  Do you also want the conf directory to be moved?  If not, then 
this makes no sense at all to me.  If you DO want the conf directory to move, 
then you simply need to set solr.solr.home and NOT solr.data.home, and 
everything moves.
everything moves.

In my opinion, whoever set up the docker image for Solr did it wrong.  They 
should have used our service installer script, which would have put the program 
into /opt/solr and everything else in /var/solr, with solr.solr.home set to 
/var/solr/data.  Instead, the docker image has the solr home in 
/opt/solr/server/solr ... similar to what happens when somebody manually starts 
solr instead of starting an installed service.



was (Author: elyograg):
bq. Wouldn't this possibly break existing Solr installations?

I hadn't thought of that, but I think you may be right.  If somebody has 
defined solr.data.home in a 7.1 install, then they upgrade Solr to a new 
version with this patch applied, I am pretty sure that the new version of Solr 
will not load any of the existing cores on its first startup, because there 
will not be any core.properties files for it to find.

This sounds like a really bad idea.

If you're specifying a different directory for data than you are for config, 
why do you want to have the core.properties file in the data directory?  Do you 
also want the conf directory to be moved?  if not, then this makes no sense at 
all to me.  If you DO want the conf directory to move, then you simply need to 
set solr.solr.home and NOT solr.data.home, and everything moves.

In my opinion, whoever set up the docker image for Solr did it wrong.  They 
should have used our service installer script, which would have put the program 
into /opt/solr and everything else in /var/solr, with solr.solr.home set to 
/var/solr/data.  Instead, the docker image has the solr home in 
/opt/solr/server/solr ... similar to what happens when somebody manually starts 
solr instead of starting an installed service.


> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11678) SSL not working if store and key passwords are different

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273418#comment-16273418
 ] 

Shawn Heisey commented on SOLR-11678:
-

Related to the note from the jetty list about a keymanager password, I checked 
how Solr configures Jetty for SSL, and there is no way provided to set that 
password.

Some info I found says that some people think the keymanager password is not 
the way things should be done:

https://stackoverflow.com/a/40941126
https://stackoverflow.com/a/10848925


> SSL not working if store and key passwords are different
> 
>
> Key: SOLR-11678
> URL: https://issues.apache.org/jira/browse/SOLR-11678
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 6.6.2
>Reporter: Constantin Bugneac
>
> If I specify different passwords for the store and the key, then Solr fails 
> to read the certificate from the JKS file with the below error.
> Example:
> SOLR_SSL_KEY_STORE_PASSWORD: "secret1"
> SOLR_SSL_TRUST_STORE_PASSWORD: "secret2"
> If I set the same password for both, it works just fine.
> Tested with the docker image 6.6.2 available here: 
> https://hub.docker.com/_/solr/
> I don't know whether this is a Java nuance or a Solr implementation issue, 
> but from a security point of view there is no point in having the same 
> password assigned to both the key store and the private key bound to a 
> specific certificate.
> Expected behaviour: It should allow specifying different passwords.
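
For context, a sketch of the settings involved as they might appear in 
solr.in.sh (paths are illustrative):

{code}
# Hypothetical solr.in.sh excerpt matching the report: with distinct
# passwords Solr fails to read the certificate; identical ones work.
SOLR_SSL_KEY_STORE=/etc/ssl/solr/keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD="secret1"
SOLR_SSL_TRUST_STORE=/etc/ssl/solr/truststore.jks
SOLR_SSL_TRUST_STORE_PASSWORD="secret2"
{code}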



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11710) Add fuzzy, wildcard, and proximity query syntax to payload query parsers

2017-11-30 Thread Johnathan Bostrom (JIRA)
Johnathan Bostrom created SOLR-11710:


 Summary: Add fuzzy, wildcard, and proximity query syntax to 
payload query parsers
 Key: SOLR-11710
 URL: https://issues.apache.org/jira/browse/SOLR-11710
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 7.0
Reporter: Johnathan Bostrom


The payload query parsers do not currently allow for special syntax such as 
wildcards, fuzzy search, and proximity search.  It would be useful to be able 
to run queries such as:  
 
{code}
{!payload_check f=text payloads='NOUN'}appel~1  
{!payload_check f=text payloads='NOUN'}app*  
{!payload_check f=text payloads='NOUN'}appl?  
{!payload_check f=text payloads='NOUN NOUN'}"apple core"~3
{code}  
  




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273380#comment-16273380
 ] 

Shawn Heisey commented on SOLR-11508:
-

bq. Wouldn't this possibly break existing Solr installations?

I hadn't thought of that, but I think you may be right.  If somebody has 
defined solr.data.home in a 7.1 install and then upgrades Solr to a new version 
with this patch applied, I am pretty sure that the new version of Solr will not 
load any of the existing cores on its first startup, because there will not be 
any core.properties files for it to find.

This sounds like a really bad idea.

If you're specifying a different directory for data than you are for config, 
why do you want to have the core.properties file in the data directory?  Do you 
also want the conf directory to be moved?  If not, then this makes no sense at 
all to me.  If you DO want the conf directory to move, then you simply need to 
set solr.solr.home and NOT solr.data.home, and everything moves.

In my opinion, whoever set up the docker image for Solr did it wrong.  They 
should have used our service installer script, which would have put the program 
into /opt/solr and everything else in /var/solr, with solr.solr.home set to 
/var/solr/data.  Instead, the docker image has the solr home in 
/opt/solr/server/solr ... similar to what happens when somebody manually starts 
solr instead of starting an installed service.


> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11708.
---
Resolution: Invalid

Sorry for the noise, gotta add "close the solr cache" to my knowledge store.
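
For anyone who hits the same hang, a sketch of the resolution (zkHost, 
collection, and fields are illustrative): close the SolrClientCache set on the 
StreamContext, not just the stream.

{code}
import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

public class StreamCloseSketch {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("q", "*:*");
    params.set("fl", "id");
    params.set("sort", "id asc");
    params.set("qt", "/export");

    SolrClientCache cache = new SolrClientCache();
    CloudSolrStream stream = new CloudSolrStream("localhost:9983", "eoe", params);
    try {
      StreamContext context = new StreamContext();
      context.setSolrClientCache(cache);
      stream.setStreamContext(context);
      stream.open();
      for (Tuple tuple = stream.read(); !tuple.EOF; tuple = stream.read()) {
        // process tuple
      }
    } finally {
      stream.close();
      cache.close(); // without this, the client JVM lingers for ~50 seconds
    }
  }
}
{code}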

> Streaming SolrJ clients hang for 50 seconds when closing the stream
> ---
>
> Key: SOLR-11708
> URL: https://issues.apache.org/jira/browse/SOLR-11708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.1, 6.6.2
>Reporter: Erick Erickson
> Attachments: Main.java
>
>
> I'll attach my client program in a second but the gist is that closing a 
> CloudSolrStream hangs the client for about 50 seconds no matter how it's 
> closed.
> Setup:
> - Create a 2-shard, leader-only collection
> - Index a million documents to it
> - run the attached program.
> At the end you'll see the message printed out "We finally stopped", but the 
> program hangs for roughly 50 seconds before exiting.
> The hang happens with all these situations:
> - read to EOF tuple
> - stop part way through
> - close in a finally block
> - close with try-with-resources
> In the early-termination case, I do see the following (expected) error in 
> solr's log:
> ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
> x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
> Client Disconnect
> I see the same behavior in 7.0
> The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
> that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273316#comment-16273316
 ] 

Erick Erickson commented on SOLR-11508:
---

From the comment at the PR:
bq: Those who need to revert to the old way can define coreRootDirectory in 
solr.xml, if they hadn't already.

WARNING: I haven't looked at the code, but this seems like a bad idea. So I may 
be out in left field, but

Wouldn't this possibly break existing Solr installations? If so, I'd far rather 
see the _new_ way of doing things be optional.

What's the advantage of adding this layer versus setting solr.solr.home?

Maybe I'm totally missing the point...

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8070) GeoExactCircle should not create circles that do not fit in the spheroid

2017-11-30 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8070:
-
Attachment: LUCENE-8070.patch

New patch. I did not create a method but instead added a new variable to 
PlanetModel. It only needs to be calculated once, and this follows the pattern 
for other properties of the planet.
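
Roughly, the precomputed property would look like this (a sketch of the pattern 
only, not the actual patch; the field name is illustrative):

{code}
// Illustrative only: computed once in the PlanetModel constructor, following
// the pattern of the model's other derived properties.
public final double minimumPoleDistance;
...
this.minimumPoleDistance = Math.min(
    surfaceDistance(NORTH_POLE, SOUTH_POLE),
    surfaceDistance(MIN_X_POLE, MAX_X_POLE));
{code}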

> GeoExactCircle should not create circles that they do not fit in spheroid 
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch, 
> LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273273#comment-16273273
 ] 

Shawn Heisey commented on SOLR-11508:
-

There is a properties file that I believe should be in dataDir, but currently 
gets put into the conf directory:  dataimport.properties.  Not sure where it 
ends up in cloud mode, since there is no conf directory.

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker, where data must be stored in a directory that is independent of 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8070) GeoExactCircle should not create circles that they do not fit in spheroid

2017-11-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273271#comment-16273271
 ] 

Karl Wright commented on LUCENE-8070:
-

[~ivera], I like the fix, but I'd move the following to PlanetModel as a method 
of its own:

{code}
final double maxRadius = Math.min(
    planetModel.surfaceDistance(planetModel.NORTH_POLE, planetModel.SOUTH_POLE),
    planetModel.surfaceDistance(planetModel.MIN_X_POLE, planetModel.MAX_X_POLE));
{code}

Thanks!


> GeoExactCircle should not create circles that they do not fit in spheroid 
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8070) GeoExactCircle should not create circles that they do not fit in spheroid

2017-11-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned LUCENE-8070:
---

Assignee: Karl Wright

> GeoExactCircle should not create circles that they do not fit in spheroid 
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 95 - Still unstable

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/95/

13 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7794D1F1ABF45364]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexSorting

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7794D1F1ABF45364]:0)


FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([5EA223D527D67E44]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.spatial3d.TestGeo3DPoint

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([5EA223D527D67E44]:0)


FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([5B2AB528A5E4DFF6:D37E8AF20B18B20E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1

[jira] [Commented] (SOLR-11508) core.properties should be stored $solr.data.home/$core.name

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273245#comment-16273245
 ] 

Shawn Heisey commented on SOLR-11508:
-

I thought the entire point of solr.data.home was to separate dataDir from 
instanceDir, without having to specify dataDir explicitly in every 
core.properties file.  When using solr.data.home, the instanceDir likely only 
contains core.properties and the conf directory, though of course when running 
SolrCloud, there is no conf directory.

The core.properties file isn't data, it's config, so I do not think it should 
be in the solr.data.home location.
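
To illustrate the instanceDir/dataDir separation being described (a 
hypothetical layout, not taken from the issue):

{code}
# with -Dsolr.data.home=/var/solr/data and a core named "mycore":
#
#   $SOLR_HOME/mycore/core.properties   <- config + discovery (instanceDir)
#   $SOLR_HOME/mycore/conf/...          <- schema, solrconfig.xml
#   /var/solr/data/mycore/index/...     <- index data (dataDir)
{code}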

> core.properties should be stored $solr.data.home/$core.name
> ---
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker, where data must be stored in a directory that is independent of 
> the rest of the container.
> Unfortunately, while core data is stored in 
> {{$\{solr.data.home}/$\{core.name}/index/...}}, core.properties is stored in 
> {{$\{solr.solr.home}/$\{core.name}/core.properties}}.
> Reading SOLR-6671 comments, I think this was the expected behaviour but I 
> don't think it is the correct one.
> In addition to being inelegant and counterintuitive, this has the drawback of 
> stripping a core of its metadata and breaking core discovery when a Solr 
> installation is redeployed, whether in Docker or not.
> core.properties is mostly metadata and although it contains some 
> configuration, this configuration is specific to the core it accompanies. I 
> believe it should be stored in solr.data.home, with the rest of the data it 
> describes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11709) JSON "Stats" Facets should support directly specifying a domain change (for filters/blockjoin/etc...)

2017-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273144#comment-16273144
 ] 

Hoss Man commented on SOLR-11709:
-

Perhaps we should add a {{type:"stat"}} long form for statistics?

so that this...

{code}
foo:"max(popularity)"
{code}

...becomes syntactic sugar for...
{code}
foo: { 
  type:"stat",
  stat:"max(popularity)"
}
{code}

...where the latter can be augmented to include an explicit domain...

{code}
foo: { 
  type:"stat",
  stat:"max(popularity)"
  domain: {
excludeTags: "mytag"
  }
}
{code}

?

> JSON "Stats" Facets should support directly specifying a domain change (for 
> filters/blockjoin/etc...)
> -
>
> Key: SOLR-11709
> URL: https://issues.apache.org/jira/browse/SOLR-11709
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> AFAICT, the simple string syntax of the JSON Facet Module's "statistic facets" 
> (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic 
> with a domain change applied -- stats are always computed relative to its 
> immediate parent (ie: the base set matching the {{q}} for a top-level stat, or 
> the constrained set if a stat is a subfacet of something else).
> This means that things like the simple "fq exclusion" in StatsComponent have 
> no straightforward equivalent in JSON faceting. 
> The workaround appears to be to use a {{type:"query", q:"*:*", domain:...}} 
> parent and specify the stats you are interested in as sub-facets...
> {code}
> $ curl 'http://localhost:8983/solr/techproducts/query' -d 
> 'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true 
> ex=boo}popularity&rows=0&json.facet={
> bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { 
> foo:"max(popularity)" } } }'
> {
>   "response":{"numFound":0,"start":0,"docs":[]
>   },
>   "facets":{
> "count":0,
> "bar":{
>   "count":32,
>   "foo":10}},
>   "stats":{
> "stats_fields":{
>   "popularity":{
> "max":10.0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11709) JSON "Stats" Facets should support directly specifying a domain change (for filters/blockjoin/etc...)

2017-11-30 Thread Hoss Man (JIRA)
Hoss Man created SOLR-11709:
---

 Summary: JSON "Stats" Facets should support directly specifying a 
domain change (for filters/blockjoin/etc...)
 Key: SOLR-11709
 URL: https://issues.apache.org/jira/browse/SOLR-11709
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


AFAICT, the simple string syntax of the JSON Facet Module's "statistic facets" 
(ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic with a 
domain change applied -- stats are always computed relative to its immediate 
parent (ie: the base set matching the {{q}} for a top-level stat, or the 
constrained set if a stat is a subfacet of something else).

This means that things like the simple "fq exclusion" in StatsComponent have no 
straightforward equivalent in JSON faceting. 

The workaround appears to be to use a {{type:"query", q:"*:*", domain:...}} 
parent and specify the stats you are interested in as sub-facets...

{code}
$ curl 'http://localhost:8983/solr/techproducts/query' -d 
'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true 
ex=boo}popularity&rows=0&json.facet={
bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { 
foo:"max(popularity)" } } }'
{
  "response":{"numFound":0,"start":0,"docs":[]
  },
  "facets":{
"count":0,
"bar":{
  "count":32,
  "foo":10}},
  "stats":{
"stats_fields":{
  "popularity":{
"max":10.0
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8068) Allow IndexWriter to write a single DWPT to disk

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273069#comment-16273069
 ] 

ASF subversion and git services commented on LUCENE-8068:
-

Commit 53c185aa35fa5fbb6d73e4bc0cc56e0fd0da0b33 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53c185a ]

LUCENE-8068: Allow IndexWriter to write a single DWPT to disk

Adds a `flushNextBuffer` method to IndexWriter that allows the caller to
synchronously move the next pending or the biggest non-pending index buffer to
disk. This enables flushing a selected buffer to disk without hijacking an
indexing thread. This is for instance useful if more than one IW (shards) must
be maintained in a single JVM / system.
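
A minimal usage sketch (assuming the new IndexWriter.flushNextBuffer() plus the 
existing IndexWriter.ramBytesUsed(); the selection policy is made up for 
illustration, and exception handling is omitted):

{code}
// Pick the writer using the most indexing RAM and synchronously move one of
// its buffers (a DWPT) to disk, without tying up an indexing thread.
IndexWriter biggest = null;
for (IndexWriter w : writers) {
  if (biggest == null || w.ramBytesUsed() > biggest.ramBytesUsed()) {
    biggest = w;
  }
}
if (biggest != null) {
  biggest.flushNextBuffer();
}
{code}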


> Allow IndexWriter to write a single DWPT to disk
> 
>
> Key: LUCENE-8068
> URL: https://issues.apache.org/jira/browse/LUCENE-8068
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8068.patch, LUCENE-8068.patch, LUCENE-8068.patch, 
> LUCENE-8068.patch
>
>
> Today IW can only flush a DWPT to disk if an external resource calls 
> flush() or refreshes an NRT reader, or if a DWPT is selected as flush pending. 
> Yet, the latter has the problem that it always ties up an indexing thread, and 
> if flush / NRT refresh is called, a whole bunch of indexing threads gets tied 
> up. If IW could offer a simple `flushNextBuffer()` method that synchronously 
> flushes the next pending or biggest active buffer to disk, memory could be 
> controlled in a more fine-grained fashion from outside of the IW. This is 
> for instance useful if more than one IW (shards) must be maintained in a 
> single JVM / system. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8068) Allow IndexWriter to write a single DWPT to disk

2017-11-30 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8068.
-
Resolution: Fixed

> Allow IndexWriter to write a single DWPT to disk
> 
>
> Key: LUCENE-8068
> URL: https://issues.apache.org/jira/browse/LUCENE-8068
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8068.patch, LUCENE-8068.patch, LUCENE-8068.patch, 
> LUCENE-8068.patch
>
>
> Today IW can only flush a DWPT to disk if an external resource calls 
> flush() or refreshes an NRT reader, or if a DWPT is selected as flush pending. 
> Yet, the latter has the problem that it always ties up an indexing thread, and 
> if flush / NRT refresh is called, a whole bunch of indexing threads gets tied 
> up. If IW could offer a simple `flushNextBuffer()` method that synchronously 
> flushes the next pending or biggest active buffer to disk, memory could be 
> controlled in a more fine-grained fashion from outside of the IW. This is 
> for instance useful if more than one IW (shards) must be maintained in a 
> single JVM / system. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8068) Allow IndexWriter to write a single DWPT to disk

2017-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273060#comment-16273060
 ] 

ASF subversion and git services commented on LUCENE-8068:
-

Commit 01d12777c4bcab7ae8085d5ed5e1b20a0e1a5526 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=01d1277 ]

LUCENE-8068: Allow IndexWriter to write a single DWPT to disk

Adds a `flushNextBuffer` method to IndexWriter that allows the caller to
synchronously move the next pending or the biggest non-pending index buffer to
disk. This enables flushing a selected buffer to disk without hijacking an
indexing thread. This is for instance useful if more than one IW (shards) must
be maintained in a single JVM / system.


> Allow IndexWriter to write a single DWPT to disk
> 
>
> Key: LUCENE-8068
> URL: https://issues.apache.org/jira/browse/LUCENE-8068
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8068.patch, LUCENE-8068.patch, LUCENE-8068.patch, 
> LUCENE-8068.patch
>
>
> Today IW can only flush a DWPT to disk if an external resource calls 
> flush() or refreshes an NRT reader, or if a DWPT is selected as flush pending. 
> Yet, the latter has the problem that it always ties up an indexing thread, and 
> if flush / NRT refresh is called, a whole bunch of indexing threads gets tied 
> up. If IW could offer a simple `flushNextBuffer()` method that synchronously 
> flushes the next pending or biggest active buffer to disk, memory could be 
> controlled in a more fine-grained fashion from outside of the IW. This is 
> for instance useful if more than one IW (shards) must be maintained in a 
> single JVM / system. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8070) GeoExactCircle should not create circles that they do not fit in spheroid

2017-11-30 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8070:
-
Component/s: modules/spatial3d

> GeoExactCircle should not create circles that they do not fit in spheroid 
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273044#comment-16273044
 ] 

Joel Bernstein commented on SOLR-11708:
---

I believe I see the problem in your code: you need to explicitly close the 
SolrClientCache to exit your program. That should cause it to exit 
immediately; a sketch of the ordering follows.
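
A minimal sketch of that shutdown ordering (assuming the standard SolrJ 
streaming setup; zkHost, collection, and params are placeholders):

{code}
SolrClientCache cache = new SolrClientCache();
StreamContext context = new StreamContext();
context.setSolrClientCache(cache);

CloudSolrStream stream = new CloudSolrStream(zkHost, collection, params);
stream.setStreamContext(context);
try {
  stream.open();
  Tuple tuple = stream.read();
  while (!tuple.EOF) {
    // process the tuple ...
    tuple = stream.read();
  }
} finally {
  stream.close();   // releases the stream itself
  cache.close();    // closes the cached clients so the JVM can exit promptly
}
{code}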

> Streaming SolrJ clients hang for 50 seconds when closing the stream
> ---
>
> Key: SOLR-11708
> URL: https://issues.apache.org/jira/browse/SOLR-11708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.1, 6.6.2
>Reporter: Erick Erickson
> Attachments: Main.java
>
>
> I'll attach my client program in a second but the gist is that closing a 
> CloudSolrStream hangs the client for about 50 seconds no matter how it's 
> closed.
> Setup:
> - Create a 2-shard, leader-only collection
> - Index a million documents to it
> - run the attached program.
> At the end you'll see the message "We finally stopped" printed out, but the 
> program hangs for roughly 50 seconds before exiting.
> The hang happens with all these situations:
> - read to EOF tuple
> - stop part way through
> - close in a finally block
> - close with try-with-resources
> In the early-termination case, I do see the following (expected) error in 
> solr's log:
> ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
> x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
> Client Disconnect
> I see the same behavior in 7.0
> The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
> that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11705) Java Class Cast Exception while loading custom plugin

2017-11-30 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-11705.
-
Resolution: Invalid

> Java Class Cast Exception while loading custom plugin
> -
>
> Key: SOLR-11705
> URL: https://issues.apache.org/jira/browse/SOLR-11705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: As Ma
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8070) GeoExactCircle should not create circles that they do not fit in spheroid

2017-11-30 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8070:
-
Attachment: LUCENE-8070.patch
LUCENE-8070-test.patch

Attached is a test showing the issue and a possible solution.

The solution I propose tries to prevent such circles from being constructed. 
The (maybe naive) assumption is that we should not allow circles that contain 
antipodal points. I was hoping to find a way to compute the shortest distance 
between antipodal points on a planet, but it seems harder than I expected. 
Choosing the smaller of the distances between the poles of the planet seems to 
work (at least for WGS84-like planets). What do you think?

P.S.: I have changed the message of a thrown error as it would no longer be 
accurate.

> GeoExactCircle should not create circles that they do not fit in spheroid 
> --
>
> Key: LUCENE-8070
> URL: https://issues.apache.org/jira/browse/LUCENE-8070
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
> Attachments: LUCENE-8070-test.patch, LUCENE-8070.patch
>
>
> Hi [~daddywri],
> I have seen tests fail when we try to create circles that don't fit in 
> the planet. I think sectors of the circle start overlapping each other and 
> the shape becomes invalid. The shape should prevent that from happening.
> I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273006#comment-16273006
 ] 

Joel Bernstein commented on SOLR-11708:
---

This is not reproducing for me on 7x with SolrStream. Closing either before or 
after the EOF tuple causes an immediate exit with SolrStream.
I'll test with CloudSolrStream now...

> Streaming SolrJ clients hang for 50 seconds when closing the stream
> ---
>
> Key: SOLR-11708
> URL: https://issues.apache.org/jira/browse/SOLR-11708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.1, 6.6.2
>Reporter: Erick Erickson
> Attachments: Main.java
>
>
> I'll attach my client program in a second but the gist is that closing a 
> CloudSolrStream hangs the client for about 50 seconds no matter how it's 
> closed.
> Setup:
> - Create a 2-shard, leader-only collection
> - Index a million documents to it
> - run the attached program.
> At the end you'll see the message "We finally stopped" printed out, but the 
> program hangs for roughly 50 seconds before exiting.
> The hang happens with all these situations:
> - read to EOF tuple
> - stop part way through
> - close in a finally block
> - close with try-with-resources
> In the early-termination case, I do see the following (expected) error in 
> solr's log:
> ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
> x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
> Client Disconnect
> I see the same behavior in 7.0
> The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
> that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8070) GeoExactCircle should not create circles that they do not fit in spheroid

2017-11-30 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8070:


 Summary: GeoExactCircle should not create circles that they do not 
fit in spheroid 
 Key: LUCENE-8070
 URL: https://issues.apache.org/jira/browse/LUCENE-8070
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ignacio Vera


Hi [~daddywri],

I have seen tests fail when we try to create circles that don't fit in the 
planet. I think sectors of the circle start overlapping each other and the 
shape becomes invalid. The shape should prevent that from happening.

I will attach a test and a proposed solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11706) JSON FacetModule can't compute stats (min,max,etc...) on multivalued fields

2017-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272999#comment-16272999
 ] 

Hoss Man commented on SOLR-11706:
-

bq. ... I was pointing out how other stats could do the same thing.

Oh, oh ... I'm sorry, I understand now: some of the groundwork has already 
been laid in MinMax, and similar work could be done in other aggs. Got it.

> JSON FacetModule can't compute stats (min,max,etc...) on multivalued fields
> ---
>
> Key: SOLR-11706
> URL: https://issues.apache.org/jira/browse/SOLR-11706
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-11706.patch
>
>
> While trying to write some tests demonstrating equivalences between the 
> StatsComponent and the JSON FacetModule, I discovered that the FacetModule's 
> stat functions (min, max, etc...) don't seem to work on multivalued fields.
> Based on the stack traces, I gather the problem is because the FacetModule 
> seems to rely exclusively on using the "Function" parsers to get a value 
> source -- apparently w/o any other method of accumulating numeric stats from 
> multivalued (numeric) DocValues?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272990#comment-16272990
 ] 

Erick Erickson commented on SOLR-11708:
---

This gets stranger and stranger. If I properly close the stream I can re-use it 
immediately; the hang happens _only_ when trying to exit the program.

> Streaming SolrJ clients hang for 50 seconds when closing the stream
> ---
>
> Key: SOLR-11708
> URL: https://issues.apache.org/jira/browse/SOLR-11708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.1, 6.6.2
>Reporter: Erick Erickson
> Attachments: Main.java
>
>
> I'll attach my client program in a second but the gist is that closing a 
> CloudSolrStream hangs the client for about 50 seconds no matter how it's 
> closed.
> Setup:
> - Create a 2-shard, leader-only collection
> - Index a million documents to it
> - run the attached program.
> At the end you'll see the message "We finally stopped" printed out, but the 
> program hangs for roughly 50 seconds before exiting.
> The hang happens with all these situations:
> - read to EOF tuple
> - stop part way through
> - close in a finally block
> - close with try-with-resources
> In the early-termination case, I do see the following (expected) error in 
> solr's log:
> ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
> x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
> Client Disconnect
> I see the same behavior in 7.0
> The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
> that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11708:
--
Attachment: Main.java

Sample program. The bit where it terminates early is irrelevant; I still get 
the 50-second hang whether or not I actually read through to EOF.

> Streaming SolrJ clients hang for 50 seconds when closing the stream
> ---
>
> Key: SOLR-11708
> URL: https://issues.apache.org/jira/browse/SOLR-11708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.1, 6.6.2
>Reporter: Erick Erickson
> Attachments: Main.java
>
>
> I'll attach my client program in a second but the gist is that closing a 
> CloudSolrStream hangs the client for about 50 seconds no matter how it's 
> closed.
> Setup:
> - Create a 2-shard, leader-only collection
> - Index a million documents to it
> - run the attached program.
> At the end you'll see the message "We finally stopped" printed out, but the 
> program hangs for roughly 50 seconds before exiting.
> The hang happens with all these situations:
> - read to EOF tuple
> - stop part way through
> - close in a finally block
> - close with try-with-resources
> In the early-termination case, I do see the following (expected) error in 
> solr's log:
> ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
> x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
> Client Disconnect
> I see the same behavior in 7.0
> The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
> that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11708) Streaming SolrJ clients hang for 50 seconds when closing the stream

2017-11-30 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-11708:
-

 Summary: Streaming SolrJ clients hang for 50 seconds when closing 
the stream
 Key: SOLR-11708
 URL: https://issues.apache.org/jira/browse/SOLR-11708
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2, 7.1, 7.0
Reporter: Erick Erickson


I'll attach my client program in a second but the gist is that closing a 
CloudSolrStream hangs the client for about 50 seconds no matter how it's closed.

Setup:
- Create a 2-shard, leader-only collection
- Index a million documents to it
- run the attached program.

At the end you'll see the message "We finally stopped" printed out, but the 
program hangs for roughly 50 seconds before exiting.

The hang happens with all these situations:
- read to EOF tuple
- stop part way through
- close in a finally block
- close with try-with-resources

In the early-termination case, I do see the following (expected) error in 
solr's log:

ERROR - 2017-11-30 16:18:20.024; [c:eoe s:shard2 r:core_node4 
x:eoe_shard2_replica_n2] org.apache.solr.common.SolrException; null:Early 
Client Disconnect

I see the same behavior in 7.0

The claim is this worked in earlier 7x versions, 7.0 in particular. I'll test 
that shortly and report results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4100) Maxscore - Efficient Scoring

2017-11-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4100:
-
Attachment: LUCENE-4100.patch

Updated patch that applies Robert's feedback and simplifies things a bit, for 
instance by assuming that scores are positive (LUCENE-7996).

> Maxscore - Efficient Scoring
> 
>
> Key: LUCENE-4100
> URL: https://issues.apache.org/jira/browse/LUCENE-4100
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/query/scoring, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Stefan Pohl
>  Labels: api-change, gsoc2014, patch, performance
> Fix For: master (8.0)
>
> Attachments: LUCENE-4100.patch, LUCENE-4100.patch, LUCENE-4100.patch, 
> LUCENE-4100.patch, contrib_maxscore.tgz, maxscore.patch
>
>
> At Berlin Buzzwords 2012, I will be presenting 'maxscore', an efficient 
> algorithm first published in the IR domain in 1995 by H. Turtle & J. Flood, 
> that I find deserves more attention among Lucene users (and developers).
> I implemented a proof of concept and did some performance measurements with 
> example queries and lucenebench, Mike McCandless's package, resulting in 
> very significant speedups.
> This ticket is to start the discussion on including the implementation 
> in Lucene's codebase. Because the technique requires awareness of it 
> from the Lucene user/developer, it seems best for it to become a contrib/module 
> package so that it can consciously be chosen.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272856#comment-16272856
 ] 

Amrit Sarkar edited comment on SOLR-11676 at 11/30/17 4:07 PM:
---

Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is very 
poorly written in terms of verifying the actual collection properties passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

If we decide to write tests for the same, it will be a tad difficult.


was (Author: sarkaramr...@gmail.com):
Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is very 
poorly written in terms of verifying the actual collection properties passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-11676.patch
>
>
> I created a 2-shard X 2-replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11676:

Attachment: SOLR-11676.patch

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-11676.patch
>
>
> I created a 2-shard X 2-replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272856#comment-16272856
 ] 

Amrit Sarkar commented on SOLR-11676:
-

Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is very 
poorly written in terms of verifying the actual collection properties passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I created a 2-shard X 2-replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8033) Should FieldInfos always use a dense encoding?

2017-11-30 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272817#comment-16272817
 ] 

Michael Braun commented on LUCENE-8033:
---

Sorry [~jpountz], I meant delete by query. I don't have the snapshot of the 
sampling handy, but a large amount of time was spent constructing the 
FieldInfos, all of it in adding to byNumber within the constructor. That map is 
dropped in the dense case, though it is used so that the FieldInfos end up 
sorted in the dense case too; as the code looks right now, one would at minimum 
need to sort on another structure instead. Not 100% sure this would even be 
faster, but hopefully!

[~dsmiley] exactly, this was a significant amount of time.

> Should FieldInfos always use a dense encoding?
> --
>
> Key: LUCENE-8033
> URL: https://issues.apache.org/jira/browse/LUCENE-8033
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Spin-off from LUCENE-8018. The dense vs. sparse encoding logic of FieldInfos 
> introduces complexity. Given that the sparse encoding is only used when less 
> than 1/16th of fields are used, which sounds uncommon to me, maybe we should 
> use a dense encoding all the time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8033) Should FieldInfos always use a dense encoding?

2017-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272794#comment-16272794
 ] 

David Smiley commented on LUCENE-8033:
--

[~mbraun688] to clarify what you said, do you mean adding to the TreeMap was 
taking a significant amount of time for you?  (and thus further evidence we 
should remove the sparse encoding)

> Should FieldInfos always use a dense encoding?
> --
>
> Key: LUCENE-8033
> URL: https://issues.apache.org/jira/browse/LUCENE-8033
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Spin-off from LUCENE-8018. The dense vs. sparse encoding logic of FieldInfos 
> introduces complexity. Given that the sparse encoding is only used when less 
> than 1/16th of fields are used, which sounds uncommon to me, maybe we should 
> use a dense encoding all the time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8033) Should FieldInfos always use a dense encoding?

2017-11-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272793#comment-16272793
 ] 

Adrien Grand commented on LUCENE-8033:
--

What is DBQ? Also, it's not clear to me whether you're talking about 
byNumberMap (sparse encoding) or byNumberTable (dense encoding).
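
For context, a rough sketch of the two structures being contrasted (not the 
actual FieldInfos code):

{code}
// Dense: an array indexed directly by field number; O(1) lookup, but it
// allocates up to the maximum field number even for unused slots.
FieldInfo[] byNumberTable;

// Sparse: a java.util.TreeMap keyed by field number; compact for very sparse
// numbering, but pays a per-entry insertion cost (the byNumber cost seen in
// the sampling mentioned above).
TreeMap<Integer,FieldInfo> byNumberMap;
{code}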

> Should FieldInfos always use a dense encoding?
> --
>
> Key: LUCENE-8033
> URL: https://issues.apache.org/jira/browse/LUCENE-8033
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Spin-off from LUCENE-8018. The dense vs. sparse encoding logic of FieldInfos 
> introduces complexity. Given that the sparse encoding is only used when less 
> than 1/16th of fields are used, which sounds uncommon to me, maybe we should 
> use a dense encoding all the time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement

2017-11-30 Thread Karthik Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272790#comment-16272790
 ] 

Karthik Ramachandran commented on SOLR-11622:
-

[~talli...@mitre.org] with this patch we were able to process EML files; can 
you review the changes?

> Bundled mime4j library not sufficient for Tika requirement
> --
>
> Key: SOLR-11622
> URL: https://issues.apache.org/jira/browse/SOLR-11622
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 7.1, 6.6.2
>Reporter: Karim Malhas
>Assignee: Karthik Ramachandran
>Priority: Minor
>  Labels: build
> Attachments: SOLR-11622.patch
>
>
> Version 7.2 of Apache James Mime4j bundled with the Solr binary releases 
> does not match what is required by Apache Tika for parsing rfc2822 messages. 
> The master branch of james-mime4j seems to contain the missing Builder class:
> [https://github.com/apache/james-mime4j/blob/master/core/src/main/java/org/apache/james/mime4j/stream/MimeConfig.java
> ]
> This prevents the import of rfc2822-formatted messages, for example:
> {{./bin/post -c dovecot -type 'message/rfc822' 'testdata/email_01.txt'
> }}
> And results in the following stacktrace:
> java.lang.NoClassDefFoundError: 
> org/apache/james/mime4j/stream/MimeConfig$Builder
> at 
> org.apache.tika.parser.mail.RFC822Parser.parse(RFC822Parser.java:63)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:135)
> at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at 
> org.eclipse.j

[jira] [Commented] (LUCENE-8033) Should FieldInfos always use a dense encoding?

2017-11-30 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272779#comment-16272779
 ] 

Michael Braun commented on LUCENE-8033:
---

For us, using a lot of fields, adding to byNumber when initializing 
FieldInfos actually takes a significant amount of time during DBQs, as shown by 
sampling. 

> Should FieldInfos always use a dense encoding?
> --
>
> Key: LUCENE-8033
> URL: https://issues.apache.org/jira/browse/LUCENE-8033
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Spin-off from LUCENE-8018. The dense vs. sparse encoding logic of FieldInfos 
> introduces complexity. Given that the sparse encoding is only used when less 
> than 1/16th of fields are used, which sounds uncommon to me, maybe we should 
> use a dense encoding all the time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272773#comment-16272773
 ] 

Amrit Sarkar commented on SOLR-11676:
-

Varun, I can see what you are saying:

{{CreateCollectionCmd}}::
{code}
  int numNrtReplicas = message.getInt(NRT_REPLICAS, 
message.getInt(REPLICATION_FACTOR, numTlogReplicas>0?0:1));
{code}

But this code suggests it will pick up {{replicationFactor}} correctly. I will 
attach a debugger and test.
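
To make the fallback explicit, here is a hedged sketch of how that 
nested-default expression resolves (method and parameter names are 
illustrative, not Solr's API):

{code}
// Illustrative sketch of the nested defaults above: NRT_REPLICAS wins if
// present; otherwise REPLICATION_FACTOR; otherwise 1, or 0 when tlog
// replicas were requested.
int resolveNrtReplicas(Integer nrtReplicas, Integer replicationFactor, int numTlogReplicas) {
  int lastResort = numTlogReplicas > 0 ? 0 : 1;
  int fromReplicationFactor = (replicationFactor != null) ? replicationFactor : lastResort;
  return (nrtReplicas != null) ? nrtReplicas : fromReplicationFactor;
}
// e.g. resolveNrtReplicas(null, 2, 0) == 2, so replicationFactor=2 should
// indeed yield nrtReplicas=2 when nrtReplicas is unspecified.
{code}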



> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param sets "nrtReplicas" to 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11403) Convert some autoscaling tests to use the simulation framework

2017-11-30 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11403.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> Convert some autoscaling tests to use the simulation framework
> --
>
> Key: SOLR-11403
> URL: https://issues.apache.org/jira/browse/SOLR-11403
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.2, master (8.0)
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11397) Implement simulated DistributedQueue

2017-11-30 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11397.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> Implement simulated DistributedQueue
> 
>
> Key: SOLR-11397
> URL: https://issues.apache.org/jira/browse/SOLR-11397
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.2, master (8.0)
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11396) Implement simulated ClusterDataProvider

2017-11-30 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11396.
--
   Resolution: Fixed
Fix Version/s: 7.2

> Implement simulated ClusterDataProvider
> ---
>
> Key: SOLR-11396
> URL: https://issues.apache.org/jira/browse/SOLR-11396
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.2, master (8.0)
>
>
> Implement a simulated {{ClusterDataProvider}} that can simulate per-node 
> data, nodes going down and up, replica placement and operations, etc.
> It should be also possible to initialize this simulator using real data 
> samples, eg. a {{ClusterState}} instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11642) Implement ObjectCache for keeping shared state in SolrCloudManager

2017-11-30 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11642.
--
Resolution: Fixed

> Implement ObjectCache for keeping shared state in SolrCloudManager
> --
>
> Key: SOLR-11642
> URL: https://issues.apache.org/jira/browse/SOLR-11642
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> This is needed to get rid of Policy.Session caching in 
> OverseerCollectionMessageHandler.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8043) Attempting to add documents past limit can corrupt index

2017-11-30 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-8043:
-
Attachment: YCS_IndexTest7a.java

The test code is just a modification of the previous code I was using.
I didn't think that test code would reproduce the issue for lucene-master, but 
I reverted all my other changes to IW, and it does reproduce (w/o your patch)!  
Uploaded YCS_IndexTest7a.java

For me, this can often reproduce with as few as 4 documents indexed across 2 
threads.
{code}
## STARTING INDEXING RUN 0  IW.pendingNumDocs=0
## IW.pendingNumDocs=2
ABOUT TO CALL commit
READER: reader.maxDoc=2 IW.pendingNumDocs=2
## STARTING INDEXING RUN 1  IW.pendingNumDocs=2
## IW.pendingNumDocs=0
ABOUT TO CALL commit
READER: reader.maxDoc=2 IW.pendingNumDocs=0
ERROR!!: reader.maxDoc=2 IW.pendingNumDocs=0
After sleep,commit,close reader.maxDoc=2 IW.pendingNumDocs=0
{code}

Still needs to be turned into a proper unit test, preferably w/o any sleeps.
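
For readers following along, a hedged sketch of the general shape such a 
two-threaded reproducer takes (an assumed structure, not the attached 
YCS_IndexTest7a.java):

{code}
// Hedged sketch -- the shape of a concurrent reproducer, not the real test.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class ConcurrentIndexSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    try (IndexWriter iw = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Thread[] threads = new Thread[2];
      for (int t = 0; t < threads.length; t++) {
        threads[t] = new Thread(() -> {
          for (int i = 0; i < 2; i++) { // 2 threads x 2 docs = 4 docs total
            try {
              iw.addDocument(new Document());
            } catch (Exception e) {
              throw new RuntimeException(e);
            }
          }
        });
        threads[t].start();
      }
      for (Thread t : threads) {
        t.join();
      }
      iw.commit();
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        // A real reproducer would compare reader.maxDoc() against the
        // writer's internal pending-docs accounting here.
        System.out.println("reader.maxDoc=" + reader.maxDoc());
      }
    }
  }
}
{code}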


> Attempting to add documents past limit can corrupt index
> 
>
> Key: LUCENE-8043
> URL: https://issues.apache.org/jira/browse/LUCENE-8043
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10, 7.0, master (8.0)
>Reporter: Yonik Seeley
>Assignee: Simon Willnauer
> Attachments: LUCENE-8043.patch, YCS_IndexTest7a.java
>
>
> The IndexWriter check for too many documents does not always work, resulting 
> in going over the limit.  Once this happens, Lucene refuses to open the index 
> and throws a CorruptIndexException: Too many documents.
> This appears to affect all versions of Lucene/Solr (the check was first 
> implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in 
> 4.10).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11705) Java Class Cast Exception while loading custom plugin

2017-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272693#comment-16272693
 ] 

Shawn Heisey commented on SOLR-11705:
-

In addition to the fact that we need details that were not provided... problems 
like this are almost always code/config issues, not bugs.  This issue tracker 
is primarily for bugs and enhancement requests, and this issue is sounding like 
a support request.  For support, we ask that you use the mailing list or the 
IRC channel.

http://lucene.apache.org/solr/community.html#mailing-lists-irc


> Java Class Cast Exception while loading custom plugin
> -
>
> Key: SOLR-11705
> URL: https://issues.apache.org/jira/browse/SOLR-11705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: As Ma
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8033) Should FieldInfos always use a dense encoding?

2017-11-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8033:
-
Labels: newdev  (was: )

> Should FieldInfos always use a dense encoding?
> --
>
> Key: LUCENE-8033
> URL: https://issues.apache.org/jira/browse/LUCENE-8033
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>
> Spin-off from LUCENE-8018. The dense vs. sparse encoding logic of FieldInfos 
> introduces complexity. Given that the sparse encoding is only used when less 
> than 1/16th of fields are used, which sounds uncommon to me, maybe we should 
> use a dense encoding all the time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7996) Should we require positive scores?

2017-11-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272516#comment-16272516
 ] 

Adrien Grand commented on LUCENE-7996:
--

This might be a better option indeed.

> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
>  - fixing the scoring formula of some queries like {{BoostingQuery}} (which 
> subtracts one score from another) so that the end result may never be 
> negative
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores, e.g. because some 
> similarities that we want to support produce negative scores by design.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 93 - Still Failing

2017-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/93/

No tests ran.

Build Log:
[...truncated 28042 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.08 sec (3.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.2.0-src.tgz...
   [smoker] 31.2 MB in 0.04 sec (738.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.tgz...
   [smoker] 71.0 MB in 0.19 sec (377.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.zip...
   [smoker] 81.5 MB in 0.10 sec (828.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (17.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.2.0-src.tgz...
   [smoker] 53.2 MB in 1.90 sec (28.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.tgz...
   [smoker] 146.1 MB in 4.73 sec (30.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.zip...
   [smoker] 147.1 MB in 4.51 sec (32.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983


[jira] [Commented] (LUCENE-7996) Should we require positive scores?

2017-11-30 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272506#comment-16272506
 ] 

Alan Woodward commented on LUCENE-7996:
---

Rather than throwing errors from BoostingQuery and FunctionScoreQuery, could we 
just set the score to 0, and document that functions that return negative 
values will be truncated to 0 for scoring purposes?
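
A minimal sketch of that truncation idea (illustrative, not a patch):

{code}
// Clamp instead of throw: a negative function value contributes 0 to the score.
float clampedScore(float functionValue) {
  return Math.max(0f, functionValue);
}
{code}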

> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
>  - fixing the scoring formula of some queries like {{BoostingQuery}} (which 
> subtracts one score from another) so that the end result may never be 
> negative
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores, e.g. because some 
> similarities that we want to support produce negative scores by design.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11458) Bugs in MoveReplicaCmd handling of failures

2017-11-30 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272262#comment-16272262
 ] 

Cao Manh Dat edited comment on SOLR-11458 at 11/30/17 10:33 AM:


This bug relates to HDFS lease recovery: the tlog files of a replica 
(core_node7 in this case) get deleted and then get recovered when a new 
collection of the same name is created.

[~markrmil...@gmail.com]: for a newly created core, should we skip lease 
recovery?



was (Author: caomanhdat):
This bug relates to HDFS lease recovery: the data dir of a replica (core_node7 
in this case) gets deleted and then gets recovered when a new collection of the 
same name is created.

[~markrmil...@gmail.com]: for a newly created core, should we skip lease 
recovery?


> Bugs in MoveReplicaCmd handling of failures
> ---
>
> Key: SOLR-11458
> URL: https://issues.apache.org/jira/browse/SOLR-11458
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 7.0.1, 7.1, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>
> Spin-off from SOLR-11449:
> {quote}
> There's a section of code in moveNormalReplica that ensures that we don't 
> lose a shard leader during move. There's no corresponding protection in 
> moveHdfsReplica, which means that moving a replica that is also a shard 
> leader may potentially lead to data loss (eg. when replicationFactor=1).
> Also, there's no rollback strategy when moveHdfsReplica partially fails, 
> unlike in moveNormalReplica where the code simply skips deleting the original 
> replica - it seems that the code should attempt to restore the original 
> replica in this case? When RF=1 and such failure occurs then not restoring 
> the original replica means lost shard.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7996) Should we require positive scores?

2017-11-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7996:
-
Attachment: LUCENE-7996.patch

Here is a patch:
 - boosts must be positive.
 - AssertingScorer checks that scores are positive.
 - AssertingSimilarity checks that scores are positive regardless of the boost.
 - FunctionScoreQuery fails when the value source produces a negative value, 
but unfortunately this only occurs at runtime.

Any opinions?
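
As a rough illustration of the checks described (an assumed shape, not the 
attached patch):

{code}
// Illustrative only -- the kind of checks the patch description implies.
float checkScore(float score) {
  assert score >= 0 : "scores must be positive, got " + score;
  return score;
}

float checkBoost(float boost) {
  if (boost < 0) {
    throw new IllegalArgumentException("negative boosts are not allowed, got " + boost);
  }
  return boost;
}
{code}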

> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
>  - fixing the scoring formula of some queries like {{BoostingQuery}} (which 
> subtracts one score from another) so that the end result may never be 
> negative
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores, e.g. because some 
> similarities that we want to support produce negative scores by design.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11707) allow to configure the HDFS block size

2017-11-30 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-11707:
--

 Summary: allow to configure the HDFS block size
 Key: SOLR-11707
 URL: https://issues.apache.org/jira/browse/SOLR-11707
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: hdfs
Reporter: Hendrik Haddorp
Priority: Minor


Currently index files are created in HDFS with the block size that is defined 
on the namenode. For that, the HdfsFileWriter reads the config from the server 
and then specifies the size (and replication factor) in the FileSystem.create 
call.

For the write.lock files things work slightly differently. These are created by 
the HdfsLockFactory without specifying a block size (or replication factor). 
This results in a default being picked by the HDFS client, which is 128MB.

So currently files are created with different block sizes if the namenode is 
configured to something other than 128MB. It would be good if Solr allowed 
configuring the block size to be used. This is especially useful if the Solr 
admin is not the HDFS admin, or if different applications using HDFS have 
different requirements for their block size.
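
For illustration, a hedged sketch of passing an explicit block size to 
FileSystem.create via the standard Hadoop API (path and sizes are examples, 
not Solr configuration):

{code}
// Illustrative sketch: create an HDFS file with an explicit block size
// instead of the client-side default, using the Hadoop FileSystem API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExplicitBlockSizeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/solr/example/write.lock"); // example path
    short replication = fs.getDefaultReplication(path);
    long blockSize = 64L * 1024 * 1024; // e.g. 64MB, matching the namenode
    try (FSDataOutputStream out =
        fs.create(path, true /* overwrite */, 4096, replication, blockSize)) {
      out.hflush(); // the file now exists with the explicit block size
    }
  }
}
{code}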



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11705) Java Class Cast Exception while loading custom plugin

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272310#comment-16272310
 ] 

Amrit Sarkar commented on SOLR-11705:
-

Details?

> Java Class Cast Exception while loading custom plugin
> -
>
> Key: SOLR-11705
> URL: https://issues.apache.org/jira/browse/SOLR-11705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: As Ma
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org