[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Marc Morissette (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427982#comment-16427982
 ] 

Marc Morissette commented on LUCENE-7976:
-

[~erickerickson] Thanks for tackling this.

Regarding singleton merges: if I read your code correctly and am right about 
how Lucene works, I think that, on a large enough collection, your patch could 
generate ~50% more reads/writes when re-indexing the whole collection:
 * I think new documents are typically flushed once and merged 2-3 times before 
ending up in a large segment.
 * With a 20% delete threshold, old documents would, on average, be singleton 
merged 4 times before being expunged vs only one merge at a 50% delete 
threshold. In LaTeX notation:

{code}
20% deleted docs threshold:
\sum_{n=1}^\infty (1 - 0.2)^n = (1 / (1 - (1 - 0.2))) - 1 = 4

50% deleted docs threshold:
\sum_{n=1}^\infty (1 - 0.5)^n = (1 / (1 - (1 - 0.5))) - 1 = 1{code}
On the odd chance that my math bears any resemblance to reality, I would 
suggest that you disable singleton merges when the short term deletion rate of 
a segment is above a certain threshold (say 0.5% per hour). This should prevent 
performance degradations during heavy re-indexation while maintaining the 
desired behaviour on seldom updated indexes.
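The geometric-series estimate above can be sanity-checked with a short program. This is an illustrative sketch only, not Lucene code; `expectedMerges` and its argument are hypothetical names:

```java
public class SingletonMergeEstimate {
    /**
     * Expected number of singleton merges an old document survives before
     * being expunged, when each merge fires at the given deleted-docs
     * fraction t: sum over n >= 1 of (1 - t)^n = 1/t - 1.
     */
    static double expectedMerges(double deleteThreshold) {
        return 1.0 / deleteThreshold - 1.0;
    }

    public static void main(String[] args) {
        // 20% threshold: ~4 rewrites per document; 50% threshold: ~1
        System.out.println("20% threshold: " + expectedMerges(0.2));
        System.out.println("50% threshold: " + expectedMerges(0.5));
    }
}
```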

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
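The two cases in the quoted proposal can be sketched as follows; the field and method names (and the percentage API) are hypothetical, not the actual TieredMergePolicy code:

```java
// Hypothetical sketch of the proposed selection rule; names are
// illustrative, not the real TieredMergePolicy API.
public class DeletePctPolicySketch {
    static final double MAX_SEGMENT_MB = 5 * 1024;   // current default 5G cap
    static final double PCT_DELETED_ALLOWED = 20.0;  // proposed new parameter

    enum Action { LEAVE_ALONE, MERGE_WITH_SMALLER, REWRITE_SINGLETON }

    static Action classify(double segmentMB, double pctDeleted) {
        if (pctDeleted <= PCT_DELETED_ALLOWED) {
            return Action.LEAVE_ALONE;            // below threshold: normal TMP rules
        }
        double liveMB = segmentMB * (1.0 - pctDeleted / 100.0);
        // < 5G of live docs: pack with smaller segments up to the cap;
        // otherwise rewrite in place, e.g. a 100G optimized segment -> 80G.
        return liveMB < MAX_SEGMENT_MB ? Action.MERGE_WITH_SMALLER
                                       : Action.REWRITE_SINGLETON;
    }

    public static void main(String[] args) {
        System.out.println(classify(100 * 1024, 20.0)); // at threshold: left alone
        System.out.println(classify(100 * 1024, 25.0)); // 75G live: singleton rewrite
        System.out.println(classify(4 * 1024, 30.0));   // ~2.8G live: merge with smaller
    }
}
```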



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12133) TriggerIntegrationTest fails too easily.

2018-04-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12133.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

Based on latest [jenkins failure 
reports|http://fucit.org/solr-jenkins-reports/failure-report.html], this is 
fixed.

Thanks Mark!

> TriggerIntegrationTest fails too easily.
> 
>
> Key: SOLR-12133
> URL: https://issues.apache.org/jira/browse/SOLR-12133
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12133-testNodeMarkersRegistration.patch, 
> SOLR-12133.patch
>
>







[jira] [Assigned] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-04-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11913:
---

Assignee: David Smiley

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch, SOLR-11913.patch, SOLR-11913.patch, 
> SOLR-11913_v2.patch
>
>
> SolrJ ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.  
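The delegation described above can be sketched as follows; this is a simplified stand-in, not the actual SolrParams/ModifiableSolrParams classes:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for ModifiableSolrParams: a LinkedHashMap-backed
// params holder can implement Iterable by delegating to its entry set.
class ModifiableParamsSketch implements Iterable<Map.Entry<String, String[]>> {
    private final Map<String, String[]> vals = new LinkedHashMap<>();

    ModifiableParamsSketch set(String name, String... values) {
        vals.put(name, values);
        return this;
    }

    @Override
    public Iterator<Map.Entry<String, String[]>> iterator() {
        return vals.entrySet().iterator();  // delegate straight through
    }
}

public class ParamsIterationDemo {
    public static void main(String[] args) {
        ModifiableParamsSketch params = new ModifiableParamsSketch()
                .set("q", "*:*")
                .set("fq", "type:doc", "year:2018");
        // Java 5 for-each style, as the issue suggests
        for (Map.Entry<String, String[]> e : params) {
            System.out.println(e.getKey() + " has " + e.getValue().length + " value(s)");
        }
    }
}
```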






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 997 - Failure

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/997/

No tests ran.

Build Log:
[...truncated 23735 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2190 links (1746 relative) to 3004 anchors in 243 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427892#comment-16427892
 ] 

Erick Erickson commented on LUCENE-7976:


One more thing. The current patch doesn't deal at all with the maxSegmentCount 
parameter to findForcedMerges. I'm thinking of deprecating it and having 
another method that takes the maxMergeSegmentSize(MB). I'll change the method 
name or something so when the method is removed anyone using it won't be 
trapped by the underlying method compiling but having a different meaning.

I'm not sure what use-case is served by specifying this anyway. We ignore it 
currently when we have max-sized segments.

I started looking at this and we already have maxSegments as a parameter to 
optimize and there's a really hacky way to use that (if it's not present on the 
command, set it to Integer.MAX_VALUE) and that's just ugly. So changing that 
to maxMergeSegmentSizeMB seems cleaner.

Any objections?
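The rename-and-deprecate idea might look roughly like this; the class and method names are hypothetical, not the actual Lucene API:

```java
// Hypothetical sketch: renaming the replacement method means that once the
// deprecated int-taking overload is removed, existing callers fail to
// compile instead of silently passing a segment count where a size in MB
// is now expected.
public class ForcedMergeApiSketch {
    /** @deprecated segment-count based; use {@link #findForcedMergesBySize(double)} */
    @Deprecated
    public String findForcedMerges(int maxSegmentCount) {
        // legacy behavior: effectively "no size cap"
        return findForcedMergesBySize(Double.MAX_VALUE);
    }

    public String findForcedMergesBySize(double maxMergeSegmentSizeMB) {
        return "rewrite segments above " + maxMergeSegmentSizeMB + " MB";
    }

    public static void main(String[] args) {
        System.out.println(new ForcedMergeApiSketch().findForcedMergesBySize(5 * 1024.0));
    }
}
```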

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch
>
>






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+5) - Build # 1653 - Unstable!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1653/
Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [InternalHttpClient, 
MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:248)  
at org.apache.solr.handler.IndexFetcher.<init>(IndexFetcher.java:290)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1190) 
 at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:696) 
 at org.apache.solr.core.SolrCore.(SolrCore.java:988)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:657)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1302)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:944) 
 at java.base/java.lang.Thread.run(Thread.java:841)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:762)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:955)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:864)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1047)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:643)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:192)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:841)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:526)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:420) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1159)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
 at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:841)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1040)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:657)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1302)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:944) 
 at java.base/java.lang.Thread.run(Thread.java:841)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [InternalHttpClient, MockDirectoryWrapper, MockDirectoryWrapper, 
SolrCore]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 531 - Still Unstable!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/531/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ActionThrottleTest.testBasics

Error Message:
994ms

Stack Trace:
java.lang.AssertionError: 994ms
at 
__randomizedtesting.SeedInfo.seed([867D13851B849D8A:BBA5BDA9236AC3FA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ActionThrottleTest.testBasics(ActionThrottleTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14013 lines...]
   [junit4] Suite: org.apache.solr.cloud.ActionThrottleTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.ActionThrottleTest_867D13851B849D8A-001\init-core-data-001
   [junit4]   2> 2331196 INFO  

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 30 - Unstable

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/30/

3 tests failed.
FAILED:  
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExampleWithAutoScalingPolicy

Error Message:
After running Solr cloud example, test collection 'testCloudExamplePrompt1' not 
found in Solr at: http://localhost:60888/solr; tool output:  Welcome to the 
SolrCloud example!  This interactive session will help you launch a SolrCloud 
cluster on your local workstation. To begin, how many Solr nodes would you like 
to run in your local cluster? (specify 1-4 nodes) [2]:  Ok, let's start up 1 
Solr nodes for your example SolrCloud cluster. Please enter the port for node1 
[8983]:  Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J2/temp/solr.util.TestSolrCLIRunExample_9B66677E15147EC4-001/tempDir-001/cloud/node1/solr
  Starting up Solr on port 60888 using command: 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/bin/solr"
 start -cloud -p 60888 -s 
"temp/solr.util.TestSolrCLIRunExample_9B66677E15147EC4-001/tempDir-001/cloud/node1/solr"
  

Stack Trace:
java.lang.AssertionError: After running Solr cloud example, test collection 
'testCloudExamplePrompt1' not found in Solr at: http://localhost:60888/solr; 
tool output: 
Welcome to the SolrCloud example!

This interactive session will help you launch a SolrCloud cluster on your local 
workstation.
To begin, how many Solr nodes would you like to run in your local cluster? 
(specify 1-4 nodes) [2]: 
Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
Please enter the port for node1 [8983]: 
Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J2/temp/solr.util.TestSolrCLIRunExample_9B66677E15147EC4-001/tempDir-001/cloud/node1/solr

Starting up Solr on port 60888 using command:
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/bin/solr"
 start -cloud -p 60888 -s 
"temp/solr.util.TestSolrCLIRunExample_9B66677E15147EC4-001/tempDir-001/cloud/node1/solr"


at 
__randomizedtesting.SeedInfo.seed([9B66677E15147EC4:69CEDC716AA1E7E2]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExampleWithAutoScalingPolicy(TestSolrCLIRunExample.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS-EA] Lucene-Solr-7.3-Linux (64bit/jdk-11-ea+5) - Build # 128 - Still Unstable!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/128/
Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:36943/il/yv/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36943/il/yv/collection1
at 
__randomizedtesting.SeedInfo.seed([B6B01E25AB9ECEDB:3EE421FF0562A323]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427789#comment-16427789
 ] 

Erick Erickson commented on LUCENE-7976:


[~mikemccand] do you know under what condition in TieredMergePolicy 
segmentsToMerge is used? I'm looking at refactoring out the common code; 
conceptually, forceMerge and expungeDeletes are the same thing now, they just 
operate on slightly different initial lists. But findForcedDeletesMerges 
doesn't have that parameter and findForcedMerges does.

I guess I'm fuzzy on why a segment would be in segmentsToMerge but not in 

writer.getMergingSegments()

The latter seems to be sufficient for detecting segments that are being merged 
in other cases...
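The pattern in question — consulting writer.getMergingSegments() to skip segments that are already part of an in-flight merge — can be sketched with plain strings standing in for SegmentCommitInfo. The class and method names here are illustrative, not Lucene's API:

```java
import java.util.*;

public class MergeCandidateFilter {
    // Return the candidate segments that are not already part of an
    // in-flight merge; "merging" plays the role of
    // IndexWriter.getMergingSegments() in this sketch.
    static List<String> eligible(List<String> candidates, Set<String> merging) {
        List<String> out = new ArrayList<>();
        for (String seg : candidates) {
            if (!merging.contains(seg)) {
                out.add(seg);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> candidates = Arrays.asList("_a", "_b", "_c");
        Set<String> merging = new HashSet<>(Collections.singleton("_b"));
        System.out.println(eligible(candidates, merging)); // prints [_a, _c]
    }
}
```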

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (i.e. the behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
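The selection rule the description lays out can be sketched as a small decision function. The names, units, and thresholds below are hypothetical, not the actual patch:

```java
public class DeletePctPolicySketch {
    enum Action { IGNORE, MERGE_WITH_SMALLER, SINGLETON_REWRITE }

    // maxSegBytes ~ the 5G default; deletePctAllowed ~ the proposed knob.
    // A value of 100 keeps today's behavior: nothing is forced on delete-% grounds.
    static Action decide(long liveBytes, double pctDeleted,
                         long maxSegBytes, double deletePctAllowed) {
        if (pctDeleted <= deletePctAllowed) {
            return Action.IGNORE;              // under threshold: leave the segment alone
        }
        if (liveBytes < maxSegBytes) {
            return Action.MERGE_WITH_SMALLER;  // pack with smaller segments toward max size
        }
        return Action.SINGLETON_REWRITE;       // e.g. the 100G -> 80G rewrite above
    }

    public static void main(String[] args) {
        long G = 1L << 30;
        System.out.println(decide(80 * G, 25.0, 5 * G, 20.0));  // SINGLETON_REWRITE
        System.out.println(decide(2 * G, 25.0, 5 * G, 20.0));   // MERGE_WITH_SMALLER
        System.out.println(decide(80 * G, 25.0, 5 * G, 100.0)); // IGNORE (default behavior)
    }
}
```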



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 544 - Failure!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/544/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 52835 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /var/tmp/ecj2039318398
 [ecj-lint] Compiling 20 source files to /var/tmp/ecj2039318398
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
 (at line 24)
 [ecj-lint] import org.junit.Ignore;
 [ecj-lint]
 [ecj-lint] The import org.junit.Ignore is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:633: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:101: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/build.xml:208: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:2264:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:2095:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:2128:
 Compile failed; see the compiler error output for details.

Total time: 100 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (LUCENE-8155) Add Java 9 support to smoke tester

2018-04-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427718#comment-16427718
 ] 

Steve Rowe edited comment on LUCENE-8155 at 4/5/18 11:35 PM:
-

This can be resolved now, I think?


was (Author: steve_rowe):
This can be resolved no, I think?

> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1785 - Failure!

2018-04-05 Thread Karl Wright
Fix committed for the dangling Ignore import.
Karl

On Thu, Apr 5, 2018 at 6:09 PM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1785/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC
>
> All tests passed
>
> Build Log:
> [...truncated 52745 lines...]
> -ecj-javadoc-lint-tests:
> [mkdir] Created dir: /var/tmp/ecj362896325
>  [ecj-lint] Compiling 20 source files to /var/tmp/ecj362896325
>  [ecj-lint] --
>  [ecj-lint] 1. ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-
> Solaris/lucene/spatial3d/src/test/org/apache/lucene/
> spatial3d/geom/GeoPolygonTest.java (at line 24)
>  [ecj-lint] import org.junit.Ignore;
>  [ecj-lint]
>  [ecj-lint] The import org.junit.Ignore is never used
>  [ecj-lint] --
>  [ecj-lint] 1 problem (1 error)
>
> BUILD FAILED
> /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:633:
> The following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:101:
> The following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build.xml:208:
> The following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-master-
> Solaris/lucene/common-build.xml:2264: The following error occurred while
> executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-master-
> Solaris/lucene/common-build.xml:2095: The following error occurred while
> executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-master-
> Solaris/lucene/common-build.xml:2128: Compile failed; see the compiler
> error output for details.
>
> Total time: 94 minutes 10 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
> Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.
> Ant_AntInstallation/ANT_1.8.2
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


Re: [JENKINS] Lucene-Solr-Tests-7.x - Build # 549 - Failure

2018-04-05 Thread Karl Wright
Fix committed for the dangling Ignore import.
Karl

On Thu, Apr 5, 2018 at 6:39 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/549/
>
> All tests passed
>
> Build Log:
> [...truncated 52933 lines...]
> -ecj-javadoc-lint-tests:
> [mkdir] Created dir: /tmp/ecj1392289587
>  [ecj-lint] Compiling 20 source files to /tmp/ecj1392289587
>  [ecj-lint] --
>  [ecj-lint] 1. ERROR in /x1/jenkins/jenkins-slave/
> workspace/Lucene-Solr-Tests-7.x/lucene/spatial3d/src/test/
> org/apache/lucene/spatial3d/geom/GeoPolygonTest.java (at line 24)
>  [ecj-lint] import org.junit.Ignore;
>  [ecj-lint]
>  [ecj-lint] The import org.junit.Ignore is never used
>  [ecj-lint] --
>  [ecj-lint] 1 problem (1 error)
>
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:208:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.
> x/lucene/common-build.xml:2264: The following error occurred while
> executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.
> x/lucene/common-build.xml:2095: The following error occurred while
> executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.
> x/lucene/common-build.xml:2128: Compile failed; see the compiler error
> output for details.
>
> Total time: 90 minutes 28 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


[jira] [Updated] (SOLR-12193) Move some log messages to TRACE level

2018-04-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12193:
---
Labels: newbie newdev  (was: )

> Move some log messages to TRACE level
> -
>
> Key: SOLR-12193
> URL: https://issues.apache.org/jira/browse/SOLR-12193
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: newbie, newdev
>
> One example of a wasteful DEBUG log which could be moved to TRACE level is:
> {noformat}
> $ solr start -f -v
> 2018-04-05 22:46:14.488 INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading 
> container configuration from /opt/solr/server/solr/solr.xml
> 2018-04-05 22:46:14.574 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/@coreLoadThreads
> 2018-04-05 22:46:14.577 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/@persistent
> 2018-04-05 22:46:14.579 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/@sharedLib
> 2018-04-05 22:46:14.581 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/@zkHost
> 2018-04-05 22:46:14.583 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/cores
> 2018-04-05 22:46:14.605 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/transientCoreCacheFactory
> 2018-04-05 22:46:14.609 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/metrics/suppliers/counter
> 2018-04-05 22:46:14.609 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/metrics/suppliers/meter
> 2018-04-05 22:46:14.611 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/metrics/suppliers/timer
> 2018-04-05 22:46:14.612 DEBUG (main) [   ] o.a.s.c.Config null missing 
> optional solr/metrics/suppliers/histogram
> 201
> {noformat}
> There are probably other examples as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-2749) Co-occurrence filter

2018-04-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-2749:
---
Fix Version/s: (was: 6.0)
   (was: 4.9)

> Co-occurrence filter
> 
>
> Key: LUCENE-2749
> URL: https://issues.apache.org/jira/browse/LUCENE-2749
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 3.1, 4.0-ALPHA
>Reporter: Steve Rowe
>Priority: Minor
>
> The co-occurrence filter to be developed here will output, onto a token 
> stream, sets of tokens that co-occur within a given window.  
> These token sets can be ordered either lexically (to allow order-independent 
> matching/counting) or positionally (e.g. sliding windows of positionally 
> ordered co-occurring terms that include all terms in the window are called 
> n-grams or shingles). 
> The parameters to this filter will be: 
> * window size: this can be a fixed sequence length, sentence/paragraph 
> context (these will require sentence/paragraph segmentation, which is not in 
> Lucene yet), or over the entire token stream (full field width)
> * minimum number of co-occurring terms: >= 2
> * maximum number of co-occurring terms: <= window size
> * token set ordering (lexical or positional)
> One use case for co-occurring token sets is as candidates for collocations.
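A minimal illustration of the positional-window case described above — a sliding window emitting lexically ordered co-occurring token sets. This is just a sketch with made-up names, not the proposed filter's API:

```java
import java.util.*;

public class CooccurrenceSketch {
    // For each full window of `window` consecutive tokens, emit the set of
    // tokens in that window, ordered lexically (via TreeSet) so matching
    // can be order-independent.
    static List<SortedSet<String>> windows(List<String> tokens, int window) {
        List<SortedSet<String>> out = new ArrayList<>();
        for (int i = 0; i + window <= tokens.size(); i++) {
            out.add(new TreeSet<>(tokens.subList(i, i + window)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(windows(Arrays.asList("b", "a", "c"), 2));
        // prints [[a, b], [a, c]]
    }
}
```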



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-05 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12187:

Attachment: SOLR-12187.patch

> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase in 
> the number of issues related to race conditions between deleting a replica 
> and other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and is able to become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica because the node is not live)
>  * The node starts and the replica gets loaded
>  * DELETECOREOP has not been processed, hence the replica is still present in 
> clusterstate --> passes checkStateInZk
>  * DELETECOREOP is executed, DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself as 
> ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and does not join the leader election



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12193) Move some log messages to TRACE level

2018-04-05 Thread JIRA
Jan Høydahl created SOLR-12193:
--

 Summary: Move some log messages to TRACE level
 Key: SOLR-12193
 URL: https://issues.apache.org/jira/browse/SOLR-12193
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Jan Høydahl


One example of a wasteful DEBUG log which could be moved to TRACE level is:
{noformat}
$ solr start -f -v
2018-04-05 22:46:14.488 INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading 
container configuration from /opt/solr/server/solr/solr.xml
2018-04-05 22:46:14.574 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/@coreLoadThreads
2018-04-05 22:46:14.577 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/@persistent
2018-04-05 22:46:14.579 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/@sharedLib
2018-04-05 22:46:14.581 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/@zkHost
2018-04-05 22:46:14.583 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/cores
2018-04-05 22:46:14.605 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/transientCoreCacheFactory
2018-04-05 22:46:14.609 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/metrics/suppliers/counter
2018-04-05 22:46:14.609 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/metrics/suppliers/meter
2018-04-05 22:46:14.611 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/metrics/suppliers/timer
2018-04-05 22:46:14.612 DEBUG (main) [   ] o.a.s.c.Config null missing optional 
solr/metrics/suppliers/histogram
201
{noformat}
There are probably other examples as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-04-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427718#comment-16427718
 ] 

Steve Rowe commented on LUCENE-8155:


This can be resolved now, I think?

> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12192) Error when ulimit is unlimited

2018-04-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12192:
---
Affects Version/s: 7.3

> Error when ulimit is unlimited
> --
>
> Key: SOLR-12192
> URL: https://issues.apache.org/jira/browse/SOLR-12192
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.3
>Reporter: Martijn Koster
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>
> I noticed:
> {noformat}
> solr@fd8031538f4b:/opt/solr$  ulimit -u
> unlimited
> solr@fd8031538f4b:/opt/solr$  bin/solr
> /opt/solr/bin/solr: line 1452: [: unlimited: integer expression expected
> {noformat}
> The solr start script should check for "unlimited" and not print that error.
> Patch on https://github.com/apache/lucene-solr/pull/352
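The fix amounts to guarding the numeric comparison so that `unlimited` is handled before `[` tries to parse it as an integer. A hedged sketch of that guard — the function and message names here are illustrative, not the actual bin/solr code:

```shell
#!/bin/sh
# Only do the numeric comparison when ulimit reports a number;
# "unlimited" trivially satisfies any minimum.
check_max_procs() {
  limit="$1"   # e.g. the output of `ulimit -u`
  min="$2"
  if [ "$limit" = "unlimited" ]; then
    echo "ok"
  elif [ "$limit" -lt "$min" ]; then
    echo "too low"
  else
    echo "ok"
  fi
}

check_max_procs unlimited 65000   # -> ok (no "integer expression expected" error)
check_max_procs 1024 65000        # -> too low
```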



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12192) Error when ulimit is unlimited

2018-04-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427703#comment-16427703
 ] 

Jan Høydahl commented on SOLR-12192:


Thanks. If you mention SOLR-12192 in the title of your GitHub PR, then it will 
be properly linked with this JIRA. Can you please also put in a line in 
CHANGES.txt under "bug fixes" for 7.4?

> Error when ulimit is unlimited
> --
>
> Key: SOLR-12192
> URL: https://issues.apache.org/jira/browse/SOLR-12192
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.3
>Reporter: Martijn Koster
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>
> I noticed:
> {noformat}
> solr@fd8031538f4b:/opt/solr$  ulimit -u
> unlimited
> solr@fd8031538f4b:/opt/solr$  bin/solr
> /opt/solr/bin/solr: line 1452: [: unlimited: integer expression expected
> {noformat}
> The solr start script should check for "unlimited" and not print that error.
> Patch on https://github.com/apache/lucene-solr/pull/352



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12192) Error when ulimit is unlimited

2018-04-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12192:
---
Fix Version/s: master (8.0)
   7.4

> Error when ulimit is unlimited
> --
>
> Key: SOLR-12192
> URL: https://issues.apache.org/jira/browse/SOLR-12192
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.3
>Reporter: Martijn Koster
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>
> I noticed:
> {noformat}
> solr@fd8031538f4b:/opt/solr$  ulimit -u
> unlimited
> solr@fd8031538f4b:/opt/solr$  bin/solr
> /opt/solr/bin/solr: line 1452: [: unlimited: integer expression expected
> {noformat}
> The solr start script should check for "unlimited" and not print that error.
> Patch on https://github.com/apache/lucene-solr/pull/352



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 549 - Failure

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/549/

All tests passed

Build Log:
[...truncated 52933 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1392289587
 [ecj-lint] Compiling 20 source files to /tmp/ecj1392289587
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
 (at line 24)
 [ecj-lint] import org.junit.Ignore;
 [ecj-lint]
 [ecj-lint] The import org.junit.Ignore is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:208: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2264:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2095:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2128:
 Compile failed; see the compiler error output for details.

Total time: 90 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1522 - Still Unstable

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1522/

2 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateWithoutCoresThenDelete

Error Message:
Could not find collection:testSolrCloudCollectionWithoutCores

Stack Trace:
java.lang.AssertionError: Could not find collection:testSolrCloudCollectionWithoutCores
	at __randomizedtesting.SeedInfo.seed([8E7B76D113D81FC2:3DBEB9101CA0CC83]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertNotNull(Assert.java:526)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
	at org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.createCollection(TestCollectionsAPIViaSolrCloudCluster.java:93)
	at org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateWithoutCoresThenDelete(TestCollectionsAPIViaSolrCloudCluster.java:184)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1785 - Failure!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1785/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 52745 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /var/tmp/ecj362896325
 [ecj-lint] Compiling 20 source files to /var/tmp/ecj362896325
 [ecj-lint] --
 [ecj-lint] 1. ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java (at line 24)
 [ecj-lint] import org.junit.Ignore;
 [ecj-lint]
 [ecj-lint] The import org.junit.Ignore is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:633: The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:101: The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build.xml:208: The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:2264: The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:2095: The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:2128: Compile failed; see the compiler error output for details.

Total time: 94 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-12192) Error when ulimit is unlimited

2018-04-05 Thread Martijn Koster (JIRA)
Martijn Koster created SOLR-12192:
-

 Summary: Error when ulimit is unlimited
 Key: SOLR-12192
 URL: https://issues.apache.org/jira/browse/SOLR-12192
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCLI
Reporter: Martijn Koster


I noticed:
{noformat}
solr@fd8031538f4b:/opt/solr$  ulimit -u
unlimited

solr@fd8031538f4b:/opt/solr$  bin/solr
/opt/solr/bin/solr: line 1452: [: unlimited: integer expression expected
{noformat}
The solr start script should check for "unlimited" and not print that error.

Patch on https://github.com/apache/lucene-solr/pull/352



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] lucene-solr pull request #352: deal with ulimit being unlimited

2018-04-05 Thread makuk66
GitHub user makuk66 opened a pull request:

https://github.com/apache/lucene-solr/pull/352

deal with ulimit being unlimited

I noticed:
```
solr@fd8031538f4b:/opt/solr$  ulimit -u
unlimited

solr@fd8031538f4b:/opt/solr$  bin/solr
/opt/solr/bin/solr: line 1452: [: unlimited: integer expression expected
```

The solr start script should check for "unlimited".
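The guard the PR describes can be sketched as a small shell function (the function name and the 65000 minimum are illustrative, not the actual bin/solr code):

```shell
# Hedged sketch of the fix: treat "unlimited" as always satisfying the
# minimum before attempting a numeric comparison. check_ulimit and the
# threshold value are illustrative, not the real bin/solr implementation.
check_ulimit() {
  limit="$1"      # e.g. the output of `ulimit -u`
  required="$2"   # recommended minimum
  if [ "$limit" = "unlimited" ]; then
    return 0      # "unlimited" trivially satisfies any minimum
  fi
  if [ "$limit" -lt "$required" ]; then
    echo "WARNING: ulimit is $limit, below the recommended $required"
    return 1
  fi
  return 0
}
```

Without the string check, the numeric test `[ "$limit" -lt "$required" ]` is exactly what produces the "integer expression expected" error when `ulimit -u` prints `unlimited`.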

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/makuk66/lucene-solr-1 mak-unlimited

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/352.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #352


commit 23b02c6e20b6e6f8336fe7a6502c9bad40732d69
Author: Martijn Koster 
Date:   2018-04-05T21:54:13Z

deal with ulimit being unlimited




---




[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Erick Erickson (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427642#comment-16427642 ]

Erick Erickson commented on LUCENE-7976:


This is coming together; here's a preliminary patch. It still has nocommits and 
several rough spots: hard-coded numbers, commented-out code, etc.

I'm putting it up in case anyone interested in this wants to take a look at the 
_approach_ and poke holes in it. Please raise any concerns, but also please 
don't spend a lot of time on the details before I wrap up the things I _know_ 
will need addressing.

Current state:

0> refactors a bit of findMerges to gather the stats into a separate class, as 
that method was getting quite hard to follow. I haven't made use of that new 
class in forceMerge or expungeDeletes yet.

1> forceMerge and expungeDeletes respect maxMergedSegmentSizeMB

2> regular merging will do "singleton merges" on overly-large segments when 
more than 20% of their docs are deleted. 20% is completely arbitrary; I don't 
know the correct number yet. This handles the case of a single-segment 
optimize not getting merged away for a long time.

3> forceMerge will purge all deleted docs. It tries to assemble max-sized 
segments. Any segments where the live docs are larger than 
maxMergedSegmentSizeMB get a singleton merge.

4> fixes the annoying bit where segments reported on the admin UI are 
improperly proportioned

5> expungeDeletes now tries to assemble max-sized segments from all segments 
with > 10% deleted docs. If a segment has > 10% deleted docs _and_ its 
liveDocs > maxMergedSegmentSizeMB, it gets a singleton merge.
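The selection rules in items 2 and 5 above can be sketched roughly as follows (the thresholds come from the comment text; the class and method names are illustrative, not the patch's actual code):

```java
// Rough sketch of the singleton-merge selection rules described above.
// Thresholds are taken from the comment; names are illustrative.
public class SingletonMergeSketch {
    static final double MAX_SEGMENT_GB = 5.0;  // default max merged segment size, in GB

    // Rule 2 (regular merging): rewrite an overly-large segment alone
    // once more than 20% of its docs are deleted.
    static boolean singletonOnRegularMerge(double segmentGb, double deletedPct) {
        return segmentGb > MAX_SEGMENT_GB && deletedPct > 0.20;
    }

    // Rule 5 (expungeDeletes): segments with > 10% deleted docs are
    // candidates; those whose live docs alone exceed the max size get
    // rewritten alone rather than combined with others.
    static boolean singletonOnExpungeDeletes(double liveGb, double deletedPct) {
        return deletedPct > 0.10 && liveGb > MAX_SEGMENT_GB;
    }

    public static void main(String[] args) {
        System.out.println(singletonOnRegularMerge(100.0, 0.25));  // true
        System.out.println(singletonOnRegularMerge(3.0, 0.25));    // false: small enough for normal merging
    }
}
```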

What's left to do:

1> more rigorous testing. So far I've just been looking at the admin UI 
segments screen and saying "that looks about right".

2> Normal merging rewrites the largest segment too often until it gets to max 
segment size. I think it also merges dissimilar-sized segments too often.

3> compare the total number of bytes written for one of my test runs between 
the old and new versions. I'm sure this does more writing, just not sure how 
much.

4> allow forceMerge to merge down to one segment without having to change 
solrconfig.xml.

5> perhaps refactor findMerges, forceMerge and findForcedDeletesMerges to 
make use of common code.


> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate (think many 
> TB), solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
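The "97.5G of 100G" figure in the quoted description follows from simple arithmetic, assuming (as the description implies) that a segment only becomes eligible for natural merging once its live data fits within half the max merged segment size. A quick sketch (the class and method names are illustrative, not Lucene API):

```java
// Arithmetic behind the quoted example: with a 5G max merged segment
// size, a 100G segment is merge-eligible only once its live data is
// under 2.5G, i.e. after 97.5G worth of docs have been deleted.
public class MergeEligibilitySketch {
    static double deletedDataNeeded(double segmentGb, double maxSegmentGb) {
        double liveBudget = maxSegmentGb / 2.0;  // 2.5G with the 5G default
        return segmentGb - liveBudget;           // data that must be deleted first
    }

    public static void main(String[] args) {
        System.out.println(deletedDataNeeded(100.0, 5.0));  // 97.5
    }
}
```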

[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Erick Erickson (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated LUCENE-7976:
---
Attachment: LUCENE-7976.patch





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-04-05 Thread David Smiley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427637#comment-16427637 ]

David Smiley commented on SOLR-11865:
-

This is looking very good Bruno.  You even hid InitializationException.  I 
think I can take it from here but would like some input.  I can throw up 
another patch.  I've already tweaked some formatting locally (e.g. bad or wrong 
indentation).
* Why make it possible for a subclass to change the _default_ values of 
settings?  I'm looking at the getters you added referring to some constants and 
it seems needless.  These are just defaults after all; can't you simply be 
explicit if the default doesn't suit you?
* RE keepElevationPriority, I saw you added it to QueryElevationParams but 
realized you're not actually using it as a _parameter_, you're using it as a 
config file setting name (which we don't call parameters in Solr as it's 
ambiguous).  Therefore it goes to a constant in QEC.
** I want to ensure I understand this setting better.  I did read the docs you 
put on the constant definition.  So it requires forceElevation.  If this is 
configured to false, will the sort order of the elevated documents be not only 
at the top but then sorted by the sort parameter coming into Solr?  And if true 
it's in config-order?  Maybe this could be named forceElevationWithConfigOrder? 
That way its name suggests a clearer relationship with forceElevation.

BTW I'm going to add a bit of docs to the ref guide (file 
{{the-query-elevation-component.adoc}}) here.  

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Priority: Minor
>  Labels: QueryComponent
> Fix For: master (8.0)
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch
>
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.







[jira] [Commented] (SOLR-12139) Support "eq" function for string fields

2018-04-05 Thread Lucene/Solr QA (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427592#comment-16427592 ]

Lucene/Solr QA commented on SOLR-12139:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 10s{color} | {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 28s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.ZkControllerTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12139 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917749/SOLR-12139.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 9009fe6 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_152 |
| unit | https://builds.apache.org/job/PreCommit-SOLR-Build/39/artifact/out/patch-unit-solr_core.txt |
| Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/39/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/39/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Support "eq" function for string fields
> ---
>
> Key: SOLR-12139
> URL: https://issues.apache.org/jira/browse/SOLR-12139
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Andrey Kudryavtsev
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch
>
>
> I just discovered that the {{eq}} function works for numeric fields only.
> For string types it results in {{java.lang.UnsupportedOperationException}}.
> What do you think about extending it to support at least some string types 
> as well?







[jira] [Commented] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-04-05 Thread James Dyer (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427580#comment-16427580 ]

James Dyer commented on SOLR-10513:
---

[~sarkaramr...@gmail.com], will just checking for the StringDistance properly 
solve your problem?  If so, I think we should limit it to this.  Beyond this 
simple fix, we likely need to re-think how we configure CSSC, as suggested by 
[~varunthacker].  CSSC was put here to allow you to use 
WordBreakSolrSpellChecker with another spell checker, and as WBSSC does not use 
its own Analyzer, these checks are moot.  But I can see the use of expanding 
this to allow any combination of spell checkers; it's just not robust enough 
to handle that as it exists now.

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In the line stringDistance != checker.getStringDistance() the comparison is 
> by reference. So if you are using 2 or more spellcheckers with the same 
> distance algorithm, the exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not override the equals method. 
> This does not solve the problem yet, because the *default equals* method 
> compares references anyway.
> Hence it is impossible to use *FileBasedSolrSpellChecker* .  
> Moreover, a check of this sort should also be done in the init method, so 
> that the user does not have to wait for this error until query time. If the 
> spellcheck components have been added to *solrconfig.xml*, it should throw 
> an error during core reload itself.  
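The reference-vs-equals problem in the description can be demonstrated in isolation. Here MyDistance is a hypothetical stand-in for a StringDistance implementation that, like LuceneLevenshteinDistance, does not override equals:

```java
// Two logically identical instances of a class that does not override
// equals() compare unequal both by == and by equals(), because the
// default Object.equals is reference equality. MyDistance is a
// hypothetical stand-in, not a real Solr class.
public class DistanceEqualsDemo {
    static class MyDistance {
        // intentionally no equals()/hashCode() override
    }

    public static void main(String[] args) {
        MyDistance a = new MyDistance();
        MyDistance b = new MyDistance();
        System.out.println(a == b);       // false: different references
        System.out.println(a.equals(b));  // false: default equals is ==
    }
}
```

This is why even the equals()-based check added in Solr 6.5 can still reject two separately instantiated spellcheckers that use the same distance class.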







[JENKINS] Lucene-Solr-7.3-Linux (64bit/jdk1.8.0_162) - Build # 127 - Unstable!

2018-04-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/127/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testGammaDistribution

Error Message:


Stack Trace:
java.lang.AssertionError
	at __randomizedtesting.SeedInfo.seed([E67DDB098670AFAE:DB07F0A7A50805B9]:0)
	at org.junit.Assert.fail(Assert.java:92)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertTrue(Assert.java:54)
	at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testGammaDistribution(StreamExpressionTest.java:8650)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testDistributions

Error Message:


Stack Trace:
java.lang.AssertionError
	at __randomizedtesting.SeedInfo.seed([E67DDB098670AFAE:59829AA3588A4F32]:0)

[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427571#comment-16427571 ]

ASF subversion and git services commented on LUCENE-8239:
-

Commit aba793def66628407f18979ff7c079e638724e97 in lucene-solr's branch 
refs/heads/master from broustant
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aba793d ]

LUCENE-8239: remove unused import of @Ignore


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8239.patch
>
>
> When calling {{within}} method in GeoComplexPolygon you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, and therefore the intersection with the above plane fails. We 
> should avoid navigating those planes (we should not even construct them).







[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-05 Thread Erick Erickson (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated LUCENE-7976:
---
Summary: Make TieredMergePolicy respect maxSegmentSizeMB and allow 
singleton merges of very large segments  (was: Add a parameter to 
TieredMergePolicy to merge segments that have more than X percent deleted 
documents)

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
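The proposed eligibility rule can be sketched in a few lines. This is a hedged illustration with hypothetical names, not TieredMergePolicy's actual code: a segment whose deleted-document percentage exceeds the configured threshold becomes eligible for merging or rewriting regardless of its size.

```java
// Hedged sketch of the proposed rule (hypothetical names, not Lucene code):
// eligibility depends only on the deleted-doc percentage vs. the threshold,
// never on segment size.
public class MergeEligibility {
    static double pctDeleted(int maxDoc, int delDocs) {
        return 100.0 * delDocs / maxDoc;
    }

    /** Proposed rule: eligible once pctDeleted exceeds the configured threshold. */
    static boolean eligible(int maxDoc, int delDocs, double thresholdPct) {
        return pctDeleted(maxDoc, delDocs) > thresholdPct;
    }

    public static void main(String[] args) {
        // A 100M-doc segment with 25% deletes: eligible at a 20% threshold,
        // untouched at the ~50% level segments effectively reach today.
        System.out.println(eligible(100_000_000, 25_000_000, 20.0)); // true
        System.out.println(eligible(100_000_000, 25_000_000, 50.0)); // false
    }
}
```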



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Erick Erickson
I'd like people to get credit for test fixes, whether in a new section
or not. Cleaning up tests is enough of a thankless task as it stands,
some recognition for attending to that is in order ;) CHANGES.txt is a
good place for that. Besides, having a section of that may encourage
people to take fixing tests more seriously.

On Thu, Apr 5, 2018 at 12:43 PM, Anshum Gupta  wrote:
> I agree with Hoss about the fact that we should try and put things in a
> better suited section. 'Other changes' feels like an easy way out to just put
> everything, but it makes it really difficult for end users/developers to
> look at a release and find changes that might be of interest to them.
>
> I also think users are concerned/bothered about bug fixes to tests, and
> considering we have a reasonable number of commits in just that category, it
> calls for its own section. It doesn't make anything more confusing or hard
> to maintain; instead it only makes it easier for everyone looking at the
> CHANGES.txt.
>
>  Anshum
>
>
>
>
> On Apr 5, 2018, at 12:21 PM, Jason Gerlowski  wrote:
>
> To toss my two cents in, I agree with Hoss's point generally.  Burying
> important things that users may care about in "Other Changes" makes
> them harder to discover, and we should start double-checking ourselves
> on that.
>
> But as for test-fix changes specifically, if the main purpose of
> CHANGES.txt is to:
>
> be able to understand at a glance what important changes they may care about
>
>
> then I'm not sure test-fixes should be in CHANGES.txt at all.  Very
> few users are going to care about test bug fixes when evaluating
> what's new in a Solr, or what they'll need to do to upgrade.  The
> added noise probably makes it harder for users to identify which
> changes actually matter to them.
>
> Best,
>
> Jason
>
> On Thu, Apr 5, 2018 at 2:56 PM, Shawn Heisey  wrote:
>
> On 4/5/2018 12:38 PM, David Smiley wrote:
>
> The issues you listed are gray areas; I won't debate each with you.
> I respect your opinion.  I just don't see the value of a section for
> *test* bug fixes.  A user wants to know about the improvements,
> features, and bug fixes (to a running Solr instance).  Everything else
> is just not as interesting to a user so goes in other, even though
> technically it's a bug fix (to a test).
>
>
> I see two viable solutions.  One is a completely separate CHANGES file
> for dev/test issues, the other is a new section in the existing file, so
> that Other Changes isn't overrun as Hoss has noticed.
>
> It's my opinion, which I think aligns with what Hoss is saying, that the
> fact that Other Changes is getting so much use (abuse?) is an indication
> of one of two things, and quite possibly both:
>
> 1) The sections we have are insufficient for proper classification.
> 2) We aren't putting issues in the right section.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-05 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427530#comment-16427530
 ] 

Mikhail Khludnev commented on SOLR-12155:
-

Reasonable! [^SOLR-12155.patch] became way better. I also included (and fixed) 
{{checkUnInvertedField()}}.

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Priority: Major
> Attachments: SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> SOLR-12155.patch, stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12155:

Attachment: SOLR-12155.patch

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Priority: Major
> Attachments: SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> SOLR-12155.patch, stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-04-05 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427511#comment-16427511
 ] 

Houston Putman commented on SOLR-11982:
---

[~anshumg], what do you see this option being extended to in the future beyond 
request routing? I agree keeping it generic would be good if the scope were to 
expand in the future; I just can't think of any other feature it would make 
sense for this option to govern.

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with \{{shards.sort=replicaType:PULL|TLOG }}(which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).
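The ordering described in the issue can be sketched as a simple comparator. This is a toy illustration, not Solr's routing code: replica types are ranked by their position in the preference list, with unlisted types sorting last, mirroring the {{shards.sort=replicaType:PULL|TLOG}} example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Toy sketch (not Solr's implementation): order replicas so preferred types
// come first, in the order they appear in the preference list.
public class ReplicaPreference {
    static int rank(String type, List<String> preferred) {
        int i = preferred.indexOf(type);
        return i >= 0 ? i : preferred.size(); // unlisted types sort last
    }

    static List<String> sortByPreference(List<String> replicaTypes, List<String> preferred) {
        List<String> out = new ArrayList<>(replicaTypes);
        out.sort(Comparator.comparingInt((String t) -> rank(t, preferred)));
        return out;
    }

    public static void main(String[] args) {
        // Prefer PULL, then TLOG; NRT replicas are only consulted last.
        List<String> sorted = sortByPreference(
            Arrays.asList("NRT", "TLOG", "PULL"),
            Arrays.asList("PULL", "TLOG"));
        System.out.println(sorted); // [PULL, TLOG, NRT]
    }
}
```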



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-04-05 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427507#comment-16427507
 ] 

Amrit Sarkar commented on SOLR-10513:
-

[~jdyer], thank you for the updated one. The prior patches included the changes 
you have in the latest, along with verifying {{Analyzer}} and {{Accuracy}}. I 
also validated the correctness of CSSC in {{SpellCheckCollatorTest}}. If you 
think we should open a new jira to add those correctness tests, let me know and 
I will do the same.

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In the line stringDistance != checker.getStringDistance() the comparison is 
> by reference, so if you are using 2 or more spellcheckers with the same 
> distance algorithm, the exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not override the equals method, 
> which does not solve the problem, because the *default equals* method 
> still compares references.
> Hence it is impossible to use *FileBasedSolrSpellChecker*.
> Moreover, a check of this sort should also be in the init method, so that 
> the user does not have to wait for this error until query time. If the 
> spellcheck components have been configured in *solrconfig.xml*, an error 
> should be thrown during core reload itself.
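The fix direction being discussed can be sketched as follows. This is a hypothetical class, not the committed patch: a stateless StringDistance implementation can define value equality by class, so two independently constructed instances compare equal and reference identity stops mattering.

```java
// Hypothetical sketch (not the committed patch): a stateless distance
// implementation where equals() compares by class rather than by reference.
public class ToyLevenshteinDistance {
    // ... distance logic elided; the class carries no configuration state,
    // so any two instances are interchangeable.

    @Override
    public boolean equals(Object other) {
        return other != null && other.getClass() == getClass();
    }

    @Override
    public int hashCode() {
        return getClass().hashCode();
    }

    public static void main(String[] args) {
        ToyLevenshteinDistance a = new ToyLevenshteinDistance();
        ToyLevenshteinDistance b = new ToyLevenshteinDistance();
        System.out.println(a == b);      // false: distinct references
        System.out.println(a.equals(b)); // true: value equality by class
    }
}
```

With this kind of equals(), the CSSC check `stringDistance.equals(checker.getStringDistance())` passes whenever both checkers use the same distance class.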



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2473 - Failure

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2473/

All tests passed

Build Log:
[...truncated 52810 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1297188283
 [ecj-lint] Compiling 20 source files to /tmp/ecj1297188283
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
 (at line 24)
 [ecj-lint] import org.junit.Ignore;
 [ecj-lint]
 [ecj-lint] The import org.junit.Ignore is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:633: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build.xml:208:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2264:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2095:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2128:
 Compile failed; see the compiler error output for details.

Total time: 87 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-04-05 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427495#comment-16427495
 ] 

James Dyer commented on SOLR-10513:
---

See my version of the patch.  This also adds an "equals" method to 
LuceneLevenshteinDistance and improves the test to check that known 
StringDistance implementations implement "equals" and behave properly with 
CSSC.  This is a bit less of a change than prior patches.  If this is deemed 
adequate, I can commit this soon.

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In the line stringDistance != checker.getStringDistance() the comparison is 
> by reference, so if you are using 2 or more spellcheckers with the same 
> distance algorithm, the exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not override the equals method, 
> which does not solve the problem, because the *default equals* method 
> still compares references.
> Hence it is impossible to use *FileBasedSolrSpellChecker*.
> Moreover, a check of this sort should also be in the init method, so that 
> the user does not have to wait for this error until query time. If the 
> spellcheck components have been configured in *solrconfig.xml*, an error 
> should be thrown during core reload itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Anshum Gupta
I agree with Hoss about the fact that we should try and put things in a better 
suited section. 'Other changes' feels like an easy way out to just put 
everything, but it makes it really difficult for end users/developers to look 
at a release and find changes that might be of interest to them.

I also think users are concerned/bothered about bug fixes to tests, and 
considering we have a reasonable number of commits in just that category, it 
calls for its own section. It doesn't make anything more confusing or hard to 
maintain; instead it only makes it easier for everyone looking at the CHANGES.txt.

 Anshum




> On Apr 5, 2018, at 12:21 PM, Jason Gerlowski  wrote:
> 
> To toss my two cents in, I agree with Hoss's point generally.  Burying
> important things that users may care about in "Other Changes" makes
> them harder to discover, and we should start double-checking ourselves
> on that.
> 
> But as for test-fix changes specifically, if the main purpose of
> CHANGES.txt is to:
> 
>> be able to understand at a glance what important changes they may care about
> 
> then I'm not sure test-fixes should be in CHANGES.txt at all.  Very
> few users are going to care about test bug fixes when evaluating
> what's new in a Solr, or what they'll need to do to upgrade.  The
> added noise probably makes it harder for users to identify which
> changes actually matter to them.
> 
> Best,
> 
> Jason
> 
> On Thu, Apr 5, 2018 at 2:56 PM, Shawn Heisey  wrote:
>> On 4/5/2018 12:38 PM, David Smiley wrote:
>>> The issues you listed are gray areas; I won't debate each with you.
>>> I respect your opinion.  I just don't see the value of a section for
>>> *test* bug fixes.  A user wants to know about the improvements,
>>> features, and bug fixes (to a running Solr instance).  Everything else
>>> is just not as interesting to a user so goes in other, even though
>>> technically it's a bug fix (to a test).
>> 
>> I see two viable solutions.  One is a completely separate CHANGES file
>> for dev/test issues, the other is a new section in the existing file, so
>> that Other Changes isn't overrun as Hoss has noticed.
>> 
>> It's my opinion, which I think aligns with what Hoss is saying, that the
>> fact that Other Changes is getting so much use (abuse?) is an indication
>> of one of two things, and quite possibly both:
>> 
>> 1) The sections we have are insufficient for proper classification.
>> 2) We aren't putting issues in the right section.
>> 
>> Thanks,
>> Shawn
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





[jira] [Updated] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-04-05 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-10513:
--
Attachment: SOLR-10513.patch

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In the line stringDistance != checker.getStringDistance() the comparison is 
> by reference, so if you are using 2 or more spellcheckers with the same 
> distance algorithm, the exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not override the equals method, 
> which does not solve the problem, because the *default equals* method 
> still compares references.
> Hence it is impossible to use *FileBasedSolrSpellChecker*.
> Moreover, a check of this sort should also be in the init method, so that 
> the user does not have to wait for this error until query time. If the 
> spellcheck components have been configured in *solrconfig.xml*, an error 
> should be thrown during core reload itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+5) - Build # 21760 - Unstable!

2018-04-05 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-9562) Minimize queried collections for time series alias

2018-04-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427486#comment-16427486
 ] 

David Smiley commented on SOLR-9562:


Hello [~moshebla]. 
Yes, all the settings are now in "alias properties" (originally called alias 
metadata).  There's also a TimeRoutedAlias class in solr-core that parses TRA 
properties and includes logic to parse the timestamps from the collection 
names.  Doing the distributed search at the client is impossible; all of Solr's 
distributed search logic (e.g. merging shard results) is in solr-core.  We very 
likely ought to have dedicated request parameters to specify the time range 
since reverse engineering the time range from filter queries 'n such will be 
brittle and problematic (e.g. consider facets that exclude filters).  I haven't 
put much thought into this side of things yet; I'm still focused on /update 
routing efficiency & hardening.  We won't get in each other's way should you 
want to pursue this.

> Minimize queried collections for time series alias
> --
>
> Key: SOLR-9562
> URL: https://issues.apache.org/jira/browse/SOLR-9562
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Eungsop Yoo
>Priority: Minor
> Attachments: SOLR-9562-v2.patch, SOLR-9562.patch
>
>
> For indexing time series data(such as large log data), we can create a new 
> collection regularly(hourly, daily, etc.) with a write alias and create a 
> read alias for all of those collections. But all of the collections of the 
> read alias are queried even if we search over very narrow time window. In 
> this case, the docs to be queried may be stored in very small portion of 
> collections. So we don't need to do that.
> I suggest this patch for read alias to minimize queried collections. Three 
> parameters for CREATEALIAS action are added.
> || Key || Type || Required || Default || Description ||
> | timeField | string | No | | The time field name for time series data. It 
> should be date type. |
> | dateTimeFormat | string | No | | The format of the timestamp for collection 
> creation. Every collection should have a suffix (starting with "_") in this 
> format. 
> Ex. dateTimeFormat: yyyyMMdd, collectionName: col_20160927
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> | timeZone | string | No | | The time zone information for dateTimeFormat 
> parameter.
> Ex. GMT+9. 
> See 
> [DateTimeFormatter|https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html].
>  |
> And then when we query with filter query like this "timeField:\[fromTime TO 
> toTime\]", only the collections have the docs for a given time range will be 
> queried.
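The pruning idea can be sketched as follows. This is a toy illustration, not Solr's implementation, and it assumes the yyyyMMdd suffix pattern implied by the col_20160927 example: given a time-range filter, only collections whose date suffix falls inside the range need to be queried.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy sketch (not Solr's code): prune time-suffixed collections to those
// whose "_yyyyMMdd" suffix falls inside the queried date range.
public class CollectionPruning {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    static List<String> inRange(List<String> collections, LocalDate from, LocalDate to) {
        List<String> hits = new ArrayList<>();
        for (String c : collections) {
            // The suffix after the last '_' is the collection's date, per the proposal.
            LocalDate d = LocalDate.parse(c.substring(c.lastIndexOf('_') + 1), FMT);
            if (!d.isBefore(from) && !d.isAfter(to)) {
                hits.add(c);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> pruned = inRange(
            Arrays.asList("col_20160925", "col_20160926", "col_20160927"),
            LocalDate.of(2016, 9, 26), LocalDate.of(2016, 9, 27));
        System.out.println(pruned); // [col_20160926, col_20160927]
    }
}
```

A query filtered to two days touches two collections instead of fanning out to every collection behind the read alias, which is the saving the issue is after.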



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Jason Gerlowski
To toss my two cents in, I agree with Hoss's point generally.  Burying
important things that users may care about in "Other Changes" makes
them harder to discover, and we should start double-checking ourselves
on that.

But as for test-fix changes specifically, if the main purpose of
CHANGES.txt is to:

> be able to understand at a glance what important changes they may care about

then I'm not sure test-fixes should be in CHANGES.txt at all.  Very
few users are going to care about test bug fixes when evaluating
what's new in a Solr, or what they'll need to do to upgrade.  The
added noise probably makes it harder for users to identify which
changes actually matter to them.

Best,

Jason

On Thu, Apr 5, 2018 at 2:56 PM, Shawn Heisey  wrote:
> On 4/5/2018 12:38 PM, David Smiley wrote:
>> The issues you listed are gray areas; I won't debate each with you.
>> I respect your opinion.  I just don't see the value of a section for
>> *test* bug fixes.  A user wants to know about the improvements,
>> features, and bug fixes (to a running Solr instance).  Everything else
>> is just not as interesting to a user so goes in other, even though
>> technically it's a bug fix (to a test).
>
> I see two viable solutions.  One is a completely separate CHANGES file
> for dev/test issues, the other is a new section in the existing file, so
> that Other Changes isn't overrun as Hoss has noticed.
>
> It's my opinion, which I think aligns with what Hoss is saying, that the
> fact that Other Changes is getting so much use (abuse?) is an indication
> of one of two things, and quite possibly both:
>
> 1) The sections we have are insufficient for proper classification.
> 2) We aren't putting issues in the right section.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-04-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427448#comment-16427448
 ] 

Anshum Gupta commented on SOLR-11982:
-

Not trying to convert this to a bikeshed but I much prefer the 
{{shards.preference}} parameter.

This is in line with everything else and is much more open to being extended in 
the future.

Also, thanks for not going with the {{|}} as that to me would mean an {{OR}} 
and not a preferred order.

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with \{{shards.sort=replicaType:PULL|TLOG }}(which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Shawn Heisey
On 4/5/2018 12:38 PM, David Smiley wrote:
> The issues you listed are gray areas; I won't debate each with you. 
> I respect your opinion.  I just don't see the value of a section for
> *test* bug fixes.  A user wants to know about the improvements,
> features, and bug fixes (to a running Solr instance).  Everything else
> is just not as interesting to a user so goes in other, even though
> technically it's a bug fix (to a test).

I see two viable solutions.  One is a completely separate CHANGES file
for dev/test issues, the other is a new section in the existing file, so
that Other Changes isn't overrun as Hoss has noticed.

It's my opinion, which I think aligns with what Hoss is saying, that the
fact that Other Changes is getting so much use (abuse?) is an indication
of one of two things, and quite possibly both:

1) The sections we have are insufficient for proper classification.
2) We aren't putting issues in the right section.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10616) use more ant variables in ref guide pages: particular for javadoc & third-party lib versions

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427425#comment-16427425
 ] 

ASF subversion and git services commented on SOLR-10616:


Commit 6032d6011cedc14ddf2370401cfbd87488ef2b3b in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6032d60 ]

SOLR-10616: add 'java-javadocs' as a variable in the ref-guide, and cleanup 
some overly specific mentions of 'Java 8'

Continuation of SOLR-12118

(cherry picked from commit 9009fe6378c8f3fe1757ef744114c3e558919a68)


> use more ant variables in ref guide pages: particular for javadoc & 
> third-party lib versions
> 
>
> Key: SOLR-10616
> URL: https://issues.apache.org/jira/browse/SOLR-10616
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-10616.patch
>
>
> we already use ant variables for the lucene/solr version when building 
> lucene/solr javadoc links, but it would be nice if we could slurp in the JDK 
> javadoc URLs for the current java version & the versions.properties values 
> for all third-party deps as well, so that links to things like the zookeeper 
> guide, or the tika guide, or the javadocs for DateInstance would always be 
> "current"
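As a hypothetical illustration of the idea: with build-supplied attributes, a ref-guide page could reference the JDK javadoc base URL and third-party versions instead of hard-coding them. The attribute names below are made up for illustration, except {{java-javadocs}}, which the commit message mentions:

```asciidoc
// Attributes like these would be injected by the ref-guide build
// (e.g. from versions.properties); the names here are illustrative.
See the {java-javadocs}java/text/DateFormat.html[DateFormat javadocs]
for details. Solr currently bundles ZooKeeper {ivy-zookeeper-version}.
```

When a dependency is upgraded, the rendered links and version numbers would then update automatically on the next build.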



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10616) use more ant variables in ref guide pages: particular for javadoc & third-party lib versions

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427427#comment-16427427
 ] 

ASF subversion and git services commented on SOLR-10616:


Commit 9009fe6378c8f3fe1757ef744114c3e558919a68 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9009fe6 ]

SOLR-10616: add 'java-javadocs' as a variable in the ref-guide, and cleanup 
some overly specific mentions of 'Java 8'

Continuation of SOLR-12118


> use more ant variables in ref guide pages: particular for javadoc & 
> third-party lib versions
> 
>
> Key: SOLR-10616
> URL: https://issues.apache.org/jira/browse/SOLR-10616
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-10616.patch
>
>
> we already use ant variables for the lucene/solr version when building 
> lucene/solr javadoc links, but it would be nice if we could slurp in the JDK 
> javadoc URLs for the current java version & the versions.properties values 
> for all third-party deps as well, so that links to things like the zookeeper 
> guide, or the tika guide, or the javadocs for DateInstance would always be 
> "current"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427426#comment-16427426
 ] 

ASF subversion and git services commented on SOLR-12118:


Commit 6032d6011cedc14ddf2370401cfbd87488ef2b3b in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6032d60 ]

SOLR-10616: add 'java-javadocs' as a variable in the ref-guide, and cleanup 
some overly specific mentions of 'Java 8'

Continuation of SOLR-12118

(cherry picked from commit 9009fe6378c8f3fe1757ef744114c3e558919a68)


> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10616) use more ant variables in ref guide pages: particular for javadoc & third-party lib versions

2018-04-05 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10616.
-
   Resolution: Fixed
 Assignee: Hoss Man
Fix Version/s: master (8.0)
   7.4

> use more ant variables in ref guide pages: particular for javadoc & 
> third-party lib versions
> 
>
> Key: SOLR-10616
> URL: https://issues.apache.org/jira/browse/SOLR-10616
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-10616.patch
>
>
> we already use ant variables for the lucene/solr version when building 
> lucene/solr javadoc links, but it would be nice if we could slurp in the JDK 
> javadoc URLs for the current java version & the versions.properties values 
> for all third-party deps as well, so that links to things like the zookeeper 
> guide, or the tika guide, or the javadocs for DateInstance would always be 
> "current"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427428#comment-16427428
 ] 

ASF subversion and git services commented on SOLR-12118:


Commit 9009fe6378c8f3fe1757ef744114c3e558919a68 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9009fe6 ]

SOLR-10616: add 'java-javadocs' as a variable in the ref-guide, and cleanup 
some overly specific mentions of 'Java 8'

Continuation of SOLR-12118


> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread David Smiley
The issues you listed are gray areas; I won't debate each with you.  I
respect your opinion.  I just don't see the value of a section for *test*
bug fixes.  A user wants to know about the improvements, features, and bug
fixes (to a running Solr instance).  Everything else is just not as
interesting to a user so goes in other, even though technically it's a bug
fix (to a test).

On Thu, Apr 5, 2018 at 2:12 PM Chris Hostetter 
wrote:

>
> : I just read through 7.4's Other Changes.  IMO they belong where they are
> : except maybe SOLR-12176 (just by reading the notes here; I didn't go into
> : any issue).  What is the point of adding a new section pertaining to
> tests?
>
> Some of those are clearly "there was a bug ... and i fixed it" type issues
> -- which means they *should* belong in "Bug Fixes" ... the only reason i
> can think of why they aren't is because they are specifically "there was a
> bug IN A TEST ... and i fixed it" type issues, and i'm speculating that
> they wound up in "Other" because folks didn't think of them as "real bugs"
> ...
> that's my only reason for suggesting a "Test Improvements/Fixes"
>
> IE: I don't think we need a new section, but if people don't want to put
> "test bug fixes" into "bug fixes" then i'd rather have a new section than
> dump them in "other"
>
>
>
> Some of these though -- based purely on reading the CHANGES entry from
> an end users perspective -- read as straight up bug fixes or new features
> in solr itself...
>
>
> * SOLR-12086: Fix format problem in FastLRUCache description string shown
> on Cache Statistics page.
>... why is that not listed as a bug fix? it was evidently a problem
> that's now fixed -- isn't that the definition of a bug fix?
>
> * SOLR-12095: AutoScalingHandler validates trigger configurations before
> updating Zookeeper.
>... why is that not listed as a bug fix (or at least a new feature)?
> it certainly sounds like prior to this there would have been a very bad
> outcome if you tried to use an invalid trigger config.
>
> * SOLR-12176: Improve FORCELEADER to handle the case when a replica win
> the election but does not present in clusterstate
>... why is that not listed as a bug fix (or at least a new feature)?
> ... again: it sounds really scary that prior to this "something"
> (presumably bad) would happen if a replica not in the cluster state won the
> election.
>
>
> Maybe the issue is that these are just poorly worded CHANGES entries that
> make things sound worse/better/more-significant than they really are? but
> if that's the case let's fix the text to more accurately reflect why they
> aren't significant enough to be considered "bugs" (or "new features" if
> people feel there is justification in saying "it wasn't really broken
> before, but it's better now").
>
> As things stand now, from the perspective of a user, i'm left thinking
> "Whoa ... if autoscaling triggers weren't validated before this release,
> and that didn't even merit being categorized as a 'bug fix' and was just
> noted as an 'Other' change, then what other really scary stuff might not
> even merit a mention at all?"
>
>
> : On Thu, Apr 5, 2018 at 1:22 PM Chris Hostetter  >
> : wrote:
> :
> : >
> : > The "Other Changes" list in the 7.4 section of solr/CHANGES.txt is
> : > currently the largest list (by number if jiras) for all of 7.4 -- and
> : > includes many things that AFAICT really seem like they should
> : > be listed in one of the more specific list: New Features, Bug Fixes,
> : > Optimizations.
> : >
> : > I would like to suggest that committers should really second guess any
> : > inclination to put something in "Other Changes" before doing so .. it
> : > should really be the choice of last resort.  users should be able to
> : > understand at a glance what important changes they may care about, and
> : > burying stuff in "Other" makes that hard.
> : >
> : > A good rule of thumb is that if your CHANGES entry uses words "Fix" or
> : > "Improve" then that really sounds like a Bug Fix.  If folks are worried
> : > about "polluting" the Bug Fixes section with fixes to *test* bugs,
> then
> : > let's break them out into a new "Test Improvements/Fixes" section.
> : >
> : >
> : >
> : > -Hoss
> : > http://www.lucidworks.com/
> : >
> : > -
> : > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> : > For additional commands, e-mail: dev-h...@lucene.apache.org
> : >
> : > --
> : Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> : LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> : http://www.solrenterprisesearchserver.com
> :
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, 

[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document

2018-04-05 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427422#comment-16427422
 ] 

Alan Woodward commented on LUCENE-8229:
---

bq. because even a no-match response requires knowledge of the field

Thinking about it, this is unnecessary, isn't it?  We can have a specialised 
Matches object which just means 'a match in this doc, but no term hits', which 
would be returned by default if the scorer matched.  Which would allow a 
default implementation.  I'll work on a new patch.

> Add a method to Weight to retrieve matches for a single document
> 
>
> Key: LUCENE-8229
> URL: https://issues.apache.org/jira/browse/LUCENE-8229
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8229.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ability to find out exactly what a query has matched on is a fairly 
> frequent feature request, and would also make highlighters much easier to 
> implement.  There have been a few attempts at doing this, including adding 
> positions to Scorers, or re-writing queries as Spans, but these all either 
> compromise general performance or involve up-front knowledge of all queries.
> Instead, I propose adding a method to Weight that exposes an iterator over 
> matches in a particular document and field.  It should be used in a similar 
> manner to explain() - ie, just for TopDocs, not as part of the scoring loop, 
> which relieves some of the pressure on performance.
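Alan's comment above suggests a sentinel "match in this doc, but no term hits" object so that a default implementation becomes possible. A toy, Lucene-free sketch of that shape (the names {{Matches}} and {{MATCH_WITH_NO_TERMS}} follow the discussion but are illustrative, not the committed API):

```java
import java.util.Collections;
import java.util.Iterator;

public class MatchesSketch {

    /** A match occurrence, modeled here as [startPosition, endPosition]. */
    interface Matches extends Iterable<int[]> {}

    /** Sentinel: "this doc matched, but there are no term hits to report". */
    static final Matches MATCH_WITH_NO_TERMS = Collections::emptyIterator;

    /**
     * Default behaviour a Weight could fall back on: if the scorer matched
     * the doc, report the sentinel; otherwise report no match (null).
     */
    static Matches defaultMatches(boolean scorerMatchedDoc) {
        return scorerMatchedDoc ? MATCH_WITH_NO_TERMS : null;
    }

    public static void main(String[] args) {
        Matches m = defaultMatches(true);
        // Matched, but the sentinel iterates over zero term hits.
        System.out.println(m != null && !m.iterator().hasNext());
    }
}
```

The key property is that callers can distinguish "no match" (null) from "matched with no position detail" (the sentinel) without every Weight having to know its field up front.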



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 33 - Still Unstable

2018-04-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/33/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
https://127.0.0.1:36990/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html.

Error 404: Can not find: /solr/testcollection_shard1_replica_n2/update
HTTP ERROR 404: Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n2/update
Powered by Jetty:// 9.4.8.v20171121
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:36990/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/testcollection_shard1_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not find: 
/solr/testcollection_shard1_replica_n2/update
Powered by Jetty:// 9.4.8.v20171121




at 
__randomizedtesting.SeedInfo.seed([E7C3508D25B29B7B:DA1BFEA11D5CC50B]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:127)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] lucene-solr issue #312: Solr 11898

2018-04-05 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/312
  
This was merged already by the committer but not closed. Could you close it, 
@millerjeff0?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12139) Support "eq" function for string fields

2018-04-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12139:

Attachment: SOLR-12139.patch

> Support "eq" function for string fields
> ---
>
> Key: SOLR-12139
> URL: https://issues.apache.org/jira/browse/SOLR-12139
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Andrey Kudryavtsev
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch
>
>
> I just discovered that the {{eq}} user function works for numeric fields only.
> For string types it results in {{java.lang.UnsupportedOperationException}}.
> What do you think about extending it to support at least some string types as 
> well?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12139) Support "eq" function for string fields

2018-04-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427391#comment-16427391
 ] 

David Smiley commented on SOLR-12139:
-

Andrey,
Your latest patch calls objectVal() on the FunctionValues and then later 
potentially calls longVal() or doubleVal().  This can be a bunch of extra work 
internally (e.g. re-seek of sparse docValues), and is wasteful since you 
already have the object version!  I fixed this, added some docs, and 
restructured the if/else a little bit.
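The point about calling objectVal() only once can be illustrated with a toy comparison helper. This is a sketch of the idea, not Solr's actual function implementation; the assumption is that each side has already been fetched once as an Object, so no second read via longVal()/doubleVal() is needed:

```java
public class EqSketch {

    // Compare two already-fetched values (as objectVal() would return them)
    // instead of re-reading the underlying docValues via longVal()/doubleVal().
    static boolean eq(Object left, Object right) {
        if (left == null || right == null) {
            return left == right;  // equal only if both sides are missing
        }
        if (left instanceof Number && right instanceof Number) {
            // compare numerically so e.g. 1L and 1.0 are considered equal
            return ((Number) left).doubleValue() == ((Number) right).doubleValue();
        }
        return left.equals(right);  // strings and other object types
    }

    public static void main(String[] args) {
        System.out.println(eq("solr", "solr"));  // true
        System.out.println(eq(1L, 1.0));         // true
        System.out.println(eq("a", "b"));        // false
    }
}
```

Fetching the Object once avoids the internal re-seek of sparse docValues that David describes, since each side of the comparison is materialized a single time.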

> Support "eq" function for string fields
> ---
>
> Key: SOLR-12139
> URL: https://issues.apache.org/jira/browse/SOLR-12139
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Andrey Kudryavtsev
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch
>
>
> I just discovered that the {{eq}} user function works for numeric fields only.
> For string types it results in {{java.lang.UnsupportedOperationException}}.
> What do you think about extending it to support at least some string types as 
> well?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Chris Hostetter

: I just read through 7.4's Other Changes.  IMO they belong where they are
: except maybe SOLR-12176 (just by reading the notes here; I didn't go into
: any issue).  What is the point of adding a new section pertaining to tests?

Some of those are clearly "there was a bug ... and i fixed it" type issues 
-- which means they *should* belong in "Bug Fixes" ... the only reason i 
can think of why they aren't is because they are specifically "there was a 
bug IN A TEST ... and i fixed it" type issues, and i'm speculating that 
they wound up in "Other" because folks didn't think of them as "real bugs" ... 
that's my only reason for suggesting a "Test Improvements/Fixes"

IE: I don't think we need a new section, but if people don't want to put
"test bug fixes" into "bug fixes" then i'd rather have a new section than 
dump them in "other"



Some of these though -- based purely on reading the CHANGES entry from 
an end users perspective -- read as straight up bug fixes or new features 
in solr itself... 


* SOLR-12086: Fix format problem in FastLRUCache description string shown 
on Cache Statistics page.
   ... why is that not listed as a bug fix? it was evidently a problem 
that's now fixed -- isn't that the definition of a bug fix?

* SOLR-12095: AutoScalingHandler validates trigger configurations before 
updating Zookeeper.
   ... why is that not listed as a bug fix (or at least a new feature)? 
it certainly sounds like prior to this there would have been a very bad 
outcome if you tried to use an invalid trigger config.

* SOLR-12176: Improve FORCELEADER to handle the case when a replica win 
the election but does not present in clusterstate
   ... why is that not listed as a bug fix (or at least a new feature)? 
... again: it sounds really scary that prior to this "something" 
(presumably bad) would happen if a replica not in the cluster state won the 
election.


Maybe the issue is that these are just poorly worded CHANGES entries that 
make things sound worse/better/more-significant than they really are? but 
if that's the case let's fix the text to more accurately reflect why they 
aren't significant enough to be considered "bugs" (or "new features" if 
people feel there is justification in saying "it wasn't really broken 
before, but it's better now").

As things stand now, from the perspective of a user, i'm left thinking 
"Whoa ... if autoscaling triggers weren't validated before this release, 
and that didn't even merit being categorized as a 'bug fix' and was just 
noted as an 'Other' change, then what other really scary stuff might not 
even merit a mention at all?"


: On Thu, Apr 5, 2018 at 1:22 PM Chris Hostetter 
: wrote:
: 
: >
: > The "Other Changes" list in the 7.4 section of solr/CHANGES.txt is
: > currently the largest list (by number if jiras) for all of 7.4 -- and
: > includes many things that AFAICT really seem like they should
: > be listed in one of the more specific list: New Features, Bug Fixes,
: > Optimizations.
: >
: > I would like to suggest that committers should really second guess any
: > inclination to put something in "Other Changes" before doing so .. it
: > should really be the choice of last resort.  users should be able to
: > understand at a glance what important changes they may care about, and
: > burying stuff in "Other" makes that hard.
: >
: > A good rule of thumb is that if your CHANGES entry uses words "Fix" or
: > "Improve" then that really sounds like a Bug Fix.  If folks are worried
: > about "polluting" the Bug Fixes section with fixes to *test* bugs, then
: > let's break them out into a new "Test Improvements/Fixes" section.
: >
: >
: >
: > -Hoss
: > http://www.lucidworks.com/
: >
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: > For additional commands, e-mail: dev-h...@lucene.apache.org
: >
: > --
: Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
: LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
: http://www.solrenterprisesearchserver.com
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-04-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-7887.
--
Resolution: Fixed

Each of the master and branch_7x Maven jobs have succeeded twice, so resolving 
this issue again.

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-7887-WIP.patch, SOLR-7887-eoe-review.patch, 
> SOLR-7887-eoe-review.patch, SOLR-7887-fix-maven-compilation.patch, 
> SOLR-7887-followup_1.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887_followup_2.patch, SOLR-7887_followup_2.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #350: SOLR match mode change for the rouding off instead o...

2018-04-05 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/350
  
I didn't look at the change, but you should create a Jira issue in 
https://issues.apache.org/jira/projects/SOLR first. Then you can change the 
title of the PR to include the Jira code to get them linked.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12175) Add random field type and dynamic field to the default managed-schema

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427366#comment-16427366
 ] 

ASF subversion and git services commented on SOLR-12175:


Commit d2845b033e3d2b7c09c013742a60bc5826c5f5f2 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d2845b0 ]

SOLR-12175: Fix TestConfigSetsAPI


> Add random field type and dynamic field to the default managed-schema
> -
>
> Key: SOLR-12175
> URL: https://issues.apache.org/jira/browse/SOLR-12175
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12175.patch
>
>
> Currently the default managed-schema file doesn't have the random field 
> configured. Both the techproducts and example managed-schema files have it 
> configured. This ticket will add the random dynamic field and field type to 
> the default managed-schema so this functionality is available out of the box 
> when using the default schema.
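For reference, the techproducts-style configuration being ported over looks roughly like this. The field and type names follow the example schemas; the exact attributes in the committed patch may differ:

```xml
<!-- RandomSortField produces a repeatable pseudo-random sort order,
     seeded from the dynamic field's name (e.g. sort=random_1234 asc). -->
<fieldType name="random" class="solr.RandomSortField" indexed="true"/>
<dynamicField name="random_*" type="random"/>
```

Changing the suffix (e.g. {{random_1234}} to {{random_5678}}) changes the seed, so clients can get a different but still repeatable ordering per request.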



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12183) Refactor Streaming Expression test cases

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427367#comment-16427367
 ] 

ASF subversion and git services commented on SOLR-12183:


Commit 4137f320aab4cb69ced9b8da352dd5ad5e1576c3 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4137f32 ]

SOLR-12183: Fix precommit


> Refactor Streaming Expression test cases
> 
>
> Key: SOLR-12183
> URL: https://issues.apache.org/jira/browse/SOLR-12183
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket will break up the StreamExpressionTest into multiple smaller files 
> based on the following areas:
> 1) Stream Sources
> 2) Stream Decorators
> 3) Stream Evaluators (This may have to be broken up more in the future)






[jira] [Commented] (SOLR-12175) Add random field type and dynamic field to the default managed-schema

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427363#comment-16427363
 ] 

ASF subversion and git services commented on SOLR-12175:


Commit d420139c27013ddd8c5aab9dea79dab09736e869 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d420139 ]

SOLR-12175: Add random field type and dynamic field to the default 
managed-schema


> Add random field type and dynamic field to the default managed-schema
> -
>
> Key: SOLR-12175
> URL: https://issues.apache.org/jira/browse/SOLR-12175
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12175.patch
>
>
> Currently the default managed-schema file doesn't have the random field 
> configured. Both the techproducts and example managed-schema files have it 
> configured. This ticket will add the random dynamic field and field type to 
> the default managed-schema so this functionality is available out of the box 
> when using the default schema.






[jira] [Commented] (SOLR-12183) Refactor Streaming Expression test cases

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427365#comment-16427365
 ] 

ASF subversion and git services commented on SOLR-12183:


Commit c58516edf1b1525e1341f1427cae066b105c9047 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c58516e ]

SOLR-12183: Remove dead code


> Refactor Streaming Expression test cases
> 
>
> Key: SOLR-12183
> URL: https://issues.apache.org/jira/browse/SOLR-12183
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket will break up the StreamExpressionTest into multiple smaller files 
> based on the following areas:
> 1) Stream Sources
> 2) Stream Decorators
> 3) Stream Evaluators (This may have to be broken up more in the future)






[jira] [Commented] (SOLR-12183) Refactor Streaming Expression test cases

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427364#comment-16427364
 ] 

ASF subversion and git services commented on SOLR-12183:


Commit 80375acb7f696df7fb3cf0424d5e82777e3f5c87 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=80375ac ]

SOLR-12183: Refactor Streaming Expression test cases


> Refactor Streaming Expression test cases
> 
>
> Key: SOLR-12183
> URL: https://issues.apache.org/jira/browse/SOLR-12183
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket will break up the StreamExpressionTest into multiple smaller files 
> based on the following areas:
> 1) Stream Sources
> 2) Stream Decorators
> 3) Stream Evaluators (This may have to be broken up more in the future)






[jira] [Resolved] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-8239.
-
   Resolution: Fixed
 Assignee: Karl Wright
Fix Version/s: master (8.0)
   7.4
   6.7

> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, and therefore intersection with the above plane fails. We should 
> prevent navigating those planes (we should not even construct them).






[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427360#comment-16427360
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit db3a89ed4d4bfb6c8d641bcd5e30cbc307beded7 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db3a89e ]

LUCENE-8239: Handle degenerate vector case on linear edge evaluation.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, and therefore intersection with the above plane fails. We should 
> prevent navigating those planes (we should not even construct them).






[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427356#comment-16427356
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit 74c2b798eb5bf02bf161f92c17f94969dba49958 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74c2b79 ]

LUCENE-8239: Handle degenerate vector case on linear edge evaluation.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, and therefore intersection with the above plane fails. We should 
> prevent navigating those planes (we should not even construct them).






[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427357#comment-16427357
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit 25720fc4dc325b5a1bbcbd4f1c27b3070ef5a1e9 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=25720fc ]

LUCENE-8239: Handle degenerate vector case on linear edge evaluation.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, and therefore intersection with the above plane fails. We should 
> prevent navigating those planes (we should not even construct them).






Re: We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread David Smiley
I just read through 7.4's Other Changes.  IMO they belong where they are
except maybe SOLR-12176 (just by reading the notes here; I didn't go into
any issue).  What is the point of adding a new section pertaining to tests?

~ David

On Thu, Apr 5, 2018 at 1:22 PM Chris Hostetter 
wrote:

>
> The "Other Changes" list in the 7.4 section of solr/CHANGES.txt is
> currently the largest list (by number of jiras) for all of 7.4 -- and
> includes many things that AFAICT really seem like they should
> be listed in one of the more specific lists: New Features, Bug Fixes,
> Optimizations.
>
> I would like to suggest that committers should really second-guess any
> inclination to put something in "Other Changes" before doing so .. it
> should really be the choice of last resort.  Users should be able to
> understand at a glance what important changes they may care about, and
> burying stuff in "Other" makes that hard.
>
> A good rule of thumb is that if your CHANGES entry uses the words "Fix" or
> "Improve" then that really sounds like a Bug Fix.  If folks are worried
> about "polluting" the Bug Fixes section with fixes to *test* bugs, then
> let's break them out into a new "Test Improvements/Fixes" section.
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


We should be using the "Other Changes" section of CHANGES.txt more sparingly

2018-04-05 Thread Chris Hostetter


The "Other Changes" list in the 7.4 section of solr/CHANGES.txt is 
currently the largest list (by number of jiras) for all of 7.4 -- and 
includes many things that AFAICT really seem like they should 
be listed in one of the more specific lists: New Features, Bug Fixes, 
Optimizations.


I would like to suggest that committers should really second-guess any 
inclination to put something in "Other Changes" before doing so .. it 
should really be the choice of last resort.  Users should be able to 
understand at a glance what important changes they may care about, and 
burying stuff in "Other" makes that hard.


A good rule of thumb is that if your CHANGES entry uses the words "Fix" or 
"Improve" then that really sounds like a Bug Fix.  If folks are worried 
about "polluting" the Bug Fixes section with fixes to *test* bugs, then 
let's break them out into a new "Test Improvements/Fixes" section.




-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427271#comment-16427271
 ] 

ASF subversion and git services commented on SOLR-12134:


Commit 2573eac1c2cddaf8d818e5be02eef2dd7f4c178f in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2573eac ]

SOLR-12134: CHANGES entry: ref-guide 'bare-bones html' validation is now part 
of 'ant documentation' and validates javadoc links locally


> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427270#comment-16427270
 ] 

ASF subversion and git services commented on SOLR-12134:


Commit 65f13289b766b240dd821b293e312623fa2fe74e in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=65f1328 ]

SOLR-12134: CHANGES entry: ref-guide 'bare-bones html' validation is now part 
of 'ant documentation' and validates javadoc links locally

(cherry picked from commit 2573eac1c2cddaf8d818e5be02eef2dd7f4c178f)


> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427260#comment-16427260
 ] 

ASF subversion and git services commented on SOLR-12134:


Commit c3ee86bc3f68245a5271b1dfe23ae9f3a84112c9 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3ee86b ]

SOLR-12134: hook ref-guide 'bare-bones-html' validation into top level 
documentation target using relative javadoc URL prefixess that are now 
validated to point to real files

(cherry picked from commit c0709f113d78ee5e033edfef24e027bc63fa96f9)


> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Commented] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427252#comment-16427252
 ] 

Yonik Seeley commented on SOLR-12155:
-

If we wrap in a SolrException, we can specify the exact error code... (a 
generic RuntimeException would result in a 500 I think?)
Not sure what the right error code is for something like OOM though.  Probably 
the most important thing is that the source of the error is both logged and 
returned to the client.
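A minimal, hypothetical sketch of the wrapping being discussed: a low-level failure is caught and rethrown inside an exception that carries an explicit HTTP code, so the cause is both loggable and visible to the client rather than surfacing as a generic 500. The SolrExceptionSketch class below only mirrors the shape of org.apache.solr.common.SolrException(ErrorCode, String, Throwable); the names, the 500 code, and the stand-in build method are illustrative, not Solr's actual fix.

```java
// Hypothetical stand-in mirroring the shape of
// org.apache.solr.common.SolrException(ErrorCode, String, Throwable).
class SolrExceptionSketch extends RuntimeException {
    final int httpCode;

    SolrExceptionSketch(int httpCode, String msg, Throwable cause) {
        super(msg, cause);
        this.httpCode = httpCode;
    }
}

public class WrapUnInvertErrorSketch {
    // Stand-in for the allocation-heavy UnInvertedField construction.
    static Object buildUnInvertedField() {
        throw new OutOfMemoryError("field cache too large");
    }

    public static void main(String[] args) {
        try {
            buildUnInvertedField();
        } catch (Throwable t) {
            // 500 is illustrative; picking the right code for OOM-like
            // conditions is exactly the open question in the comment above.
            SolrExceptionSketch e = new SolrExceptionSketch(
                500, "Exception occurred during uninvert", t);
            System.out.println(e.httpCode + " " + e.getMessage()
                + " / cause: " + e.getCause().getMessage());
        }
    }
}
```

The point of the pattern is that the original Throwable travels as the cause while the wrapper carries the status code the response writer needs.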

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Priority: Major
> Attachments: SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.






Re: solr

2018-04-05 Thread Chris Hostetter

Possibly/probably to reproduce/repeat/expand on/disprove previous work 
done by someone else.

: Date: Thu, 5 Apr 2018 09:18:35 -0700
: From: Walter Underwood 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Re: solr
: 
: But…why do you want an obsolete version of Solr? 
: 
: 4.3.1 is from almost five years ago!
: 
: wunder
: Walter Underwood
: wun...@wunderwood.org
: http://observer.wunderwood.org/  (my blog)
: 
: > On Apr 5, 2018, at 9:05 AM, Shawn Heisey  wrote:
: > 
: > On 4/5/2018 7:32 AM, Steve Rowe wrote:
: >> You can find all past versions here: 
http://archive.apache.org/dist/lucene/solr/
: > 
: > Also, the source code for releases back to 3.1.0 are definitely included in 
a checkout from the git repository as tag branches.  So if you do a "git 
clone", you'll have all of that.
: > 
: > https://wiki.apache.org/solr/HowToContribute#Getting_the_source_code
: > 
: > Before that release, Solr was in a separate repository from Lucene, but it 
does look like there might be tags for older releases in the main repository.
: > 
: > Thanks,
: > Shawn
: > 
: > 
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: > For additional commands, e-mail: dev-h...@lucene.apache.org
: > 
: 
: 

-Hoss
http://www.lucidworks.com/


[jira] [Created] (SOLR-12191) ref guide copy field page doesn't mention glob in dest

2018-04-05 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12191:
---

 Summary: ref guide copy field page doesn't mention glob in dest
 Key: SOLR-12191
 URL: https://issues.apache.org/jira/browse/SOLR-12191
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Hoss Man


A user on the mailing list asked about things like this...

{noformat}
  
  
{noformat}

which led me to realize this is not currently documented at all on this 
page...

https://lucene.apache.org/solr/guide/copying-fields.html

(it used to be demoed in the example schema, but as those got simplified/purged 
it isn't really spelled out anywhere)
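For reference, a glob in {{dest}} looks roughly like the sketch below; the field names are illustrative, and (as I recall the rules) a wildcard in {{dest}} is only allowed when {{source}} also has one, with the destination resolved against a matching dynamic field:

{code:xml}
<!-- Illustrative only: every field matching *_s is copied to the
     corresponding *_t field, which must match a dynamic field
     definition such as <dynamicField name="*_t" .../> -->
<copyField source="*_s" dest="*_t"/>
{code}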






[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2018-04-05 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427190#comment-16427190
 ] 

Lucene/Solr QA commented on SOLR-9640:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}211m 28s{color} 
| {color:red} core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.search.TestReRankQParserPlugin |
|   | solr.schema.SpatialRPTFieldTypeTest |
|   | solr.request.TestWriterPerf |
|   | solr.TestTrie |
|   | solr.search.AnalyticsMergeStrategyTest |
|   | solr.handler.XsltUpdateRequestHandlerTest |
|   | solr.handler.component.ResponseLogComponentTest |
|   | solr.search.TestSurroundQueryParser |
|   | solr.search.TestStressRecovery |
|   | solr.core.TestSolrDeletionPolicy2 |
|   | solr.search.TestIndexSearcher |
|   | solr.spelling.WordBreakSolrSpellCheckerTest |
|   | solr.search.TestFiltering |
|   | solr.schema.SchemaVersionSpecificBehaviorTest |
|   | solr.client.solrj.embedded.TestEmbeddedSolrServerAdminHandler |
|   | solr.TestJoin |
|   | solr.search.TestRecovery |
|   | solr.handler.component.DebugComponentTest |
|   | solr.search.TestExtendedDismaxParser |
|   | solr.search.json.TestJsonRequest |
|   | solr.schema.TestSchemalessBufferedUpdates |
|   | solr.schema.ExternalFileFieldSortTest |
|   | solr.handler.loader.JavabinLoaderTest |
|   | solr.search.TestXmlQParserPlugin |
|   | solr.util.SolrPluginUtilsTest |
|   | solr.update.TestExceedMaxTermLength |
|   | solr.search.join.BlockJoinFacetSimpleTest |
|   | solr.spelling.DirectSolrSpellCheckerTest |
|   | solr.spelling.SpellCheckCollatorTest |
|   | solr.util.TestMaxTokenLenTokenizer |
|   | solr.spelling.suggest.TestPhraseSuggestions |
|   | solr.schema.TestOmitPositions |
|   | solr.request.TestFaceting |
|   | solr.schema.TestSortableTextField |
|   | solr.index.UninvertDocValuesMergePolicyTest |
|   | solr.update.processor.StatelessScriptUpdateProcessorFactoryTest |
|   | solr.search.similarities.TestClassicSimilarityFactory |
|   | solr.ConvertedLegacyTest |
|   | solr.spelling.suggest.TestFileDictionaryLookup |
|   | solr.update.UpdateParamsTest |
|   | solr.update.SoftAutoCommitTest |
|   | solr.request.TestIntervalFaceting |
|   | solr.EchoParamsTest |
|   | solr.TestCrossCoreJoin |
|   | solr.AnalysisAfterCoreReloadTest |
|   | solr.response.TestPHPSerializedResponseWriter |
|   | solr.search.join.TestScoreJoinQPNoScore |
|   | solr.search.TestValueSourceCache |
|   | solr.search.TestQueryTypes |
|   | solr.search.QueryEqualityTest |
|   | solr.search.TestSort |
|   | solr.response.transform.TestSubQueryTransformerCrossCore |
|   | solr.handler.component.SpellCheckComponentTest |
|   | solr.core.TestSolrIndexConfig |
|   | solr.highlight.HighlighterConfigTest |
|   | solr.schema.IndexSchemaRuntimeFieldTest |
|   | solr.handler.component.TestHttpShardHandlerFactory |
|   | solr.handler.MoreLikeThisHandlerTest |
|   | solr.metrics.SolrCoreMetricManagerTest |
|   | solr.response.transform.TestSubQueryTransformer |
|   | solr.update.TestInPlaceUpdatesStandalone |
|   | solr.search.TestSolr4Spatial2 |
|   | solr.update.processor.TestPartialUpdateDeduplication |
|   | solr.schema.ChangedSchemaMergeTest |
|   | solr.CursorPagingTest |
|   | solr.request.macro.TestMacros |
|   | solr.schema.BooleanFieldTest |
|   | solr.core.SolrCoreTest |
|   | 

[jira] [Resolved] (SOLR-12189) Cannot create a collection with _version_ as indexed=true and docValues="false"

2018-04-05 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12189.
--
Resolution: Not A Problem

My mistake. I was looking at an example where stored was true but actually it 
wasn't. So it was a user error.

> Cannot create a collection with _version_ as indexed=true and 
> docValues="false"
> ---
>
> Key: SOLR-12189
> URL: https://issues.apache.org/jira/browse/SOLR-12189
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Varun Thacker
>Priority: Major
>
> When I tried created a collection with the following definition for the 
> version field , Solr refused to create the collection
> {code:java}
>  docValues="false"/>
>  precisionStep="0" positionIncrementGap="0"/>{code}
>  
> {code:java}
> 2018-04-05 06:39:59.900 ERROR (qtp581374081-15) [c:test_version s:shard1 
> r:core_node1 x:test_version_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'test_version_shard1_replica_n1': Unable to create core 
> [test_version_shard1_replica_n1] Caused by: _version_ field must exist in 
> schema and be searchable (indexed or docValues) and retrievable(stored or 
> docValues) and not multiValued (_version_ not retrievable
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:949)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$168(CoreAdminOperation.java:91)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> ...
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [test_version_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:996)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:916)
> ... 37 more
> Caused by: org.apache.solr.common.SolrException: Schema will not work with 
> SolrCloud mode: _version_ field must exist in schema and be searchable 
> (indexed or docValues) and retrievable(stored or docValues) and not 
> multiValued (_version_ not retrievable
> {code}
> Based on the error message, the create collection command should have 
> succeeded 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: solr

2018-04-05 Thread Walter Underwood
But…why do you want an obsolete version of Solr? 

4.3.1 is from almost five years ago!

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Apr 5, 2018, at 9:05 AM, Shawn Heisey  wrote:
> 
> On 4/5/2018 7:32 AM, Steve Rowe wrote:
>> You can find all past versions here: 
>> http://archive.apache.org/dist/lucene/solr/
> 
> Also, the source code for releases back to 3.1.0 is definitely included in a 
> checkout from the git repository as tag branches.  So if you do a "git 
> clone", you'll have all of that.
> 
> https://wiki.apache.org/solr/HowToContribute#Getting_the_source_code
> 
> Before that release, Solr was in a separate repository from Lucene, but it 
> does look like there might be tags for older releases in the main repository.
> 
> Thanks,
> Shawn
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document

2018-04-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427185#comment-16427185
 ] 

David Smiley commented on LUCENE-8229:
--

It's really looking great, Alan.  I looked over your patch a bit more:

* I wonder if "Matches" sounds too generic; perhaps "PositionMatches" to 
emphasize it has position information and not simply matching document IDs?
* It's a shame that every Weight must implement this (no default impl) because 
even a no-match response requires knowledge of the field.  Is the distinction 
important to know the field?  I suppose it might be useful for figuring out 
generically which fields a query references... but no not really because you 
have to execute it on a matching document first to even figure that out with 
this API.
* Matcher.EMPTY (an empty version of MatchesIterator) should perhaps be moved to 
MatchesIterator?  Come to think of it, maybe MatchesIterator could be 
Matches.Iterator (inner class of Matches)?  (avoids polluting the busy .search 
namespace).
* RE payloads: I appreciate you want to keep things simple for now.  I've heard 
of putting OCR document offset information in them, for example, and a 
highlighter might want this.  A highlighter might want whatever metadata is 
being put in a payload, even if it is relevancy oriented -- consider a 
relevancy debugger tool that could show you what's in the payload.  This might 
not even be a "highlighter" per-se.
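To make the intended usage concrete, here is a self-contained toy of how such a per-document match iterator would be consumed; the names echo the discussion (Matches / MatchesIterator) but this is a sketch, not the actual patch:

```java
// Toy sketch of consuming a per-document match iterator, used like
// explain() on TopDocs hits rather than inside the scoring loop.
// Names are illustrative; this is not the real Lucene API.
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MatchesSketch {
    // One matched term occurrence: [startPosition, endPosition] in a field.
    static class Match {
        final int start, end;
        Match(int start, int end) { this.start = start; this.end = end; }
    }

    public static void main(String[] args) {
        // Pretend weight.matches(context, doc) returned these occurrences
        // for field "body" of a single top document.
        List<Match> matches = Arrays.asList(new Match(3, 3), new Match(17, 18));
        Iterator<Match> it = matches.iterator();
        while (it.hasNext()) {
            Match m = it.next();
            System.out.println(m.start + "-" + m.end);
        }
    }
}
```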

> Add a method to Weight to retrieve matches for a single document
> 
>
> Key: LUCENE-8229
> URL: https://issues.apache.org/jira/browse/LUCENE-8229
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8229.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ability to find out exactly what a query has matched on is a fairly 
> frequent feature request, and would also make highlighters much easier to 
> implement.  There have been a few attempts at doing this, including adding 
> positions to Scorers, or re-writing queries as Spans, but these all either 
> compromise general performance or involve up-front knowledge of all queries.
> Instead, I propose adding a method to Weight that exposes an iterator over 
> matches in a particular document and field.  It should be used in a similar 
> manner to explain() - ie, just for TopDocs, not as part of the scoring loop, 
> which relieves some of the pressure on performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427173#comment-16427173
 ] 

Karl Wright commented on LUCENE-8239:
-

I committed a fix for case 1.



> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon, you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, so intersection with that plane fails. We should 
> prevent navigating those planes (we should not even construct them).
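The failure mode above can be reduced to a toy check (assuming a unit sphere for simplicity; spatial3d actually works on an ellipsoid): a plane n·x = d with unit normal n misses the sphere entirely when |d| > 1, which is what happens to the travel/test-point plane envelopes near a pole.

```java
// Toy version of the LUCENE-8239 failure mode on a unit sphere
// (a simplifying assumption; the real code handles an ellipsoid).
public class PlaneSphereCheck {
    // Plane n.x = d with |n| = 1 touches the unit sphere only when |d| <= 1.
    static boolean intersectsUnitSphere(double d) {
        return Math.abs(d) <= 1.0; // tangent counts as touching at one point
    }

    public static void main(String[] args) {
        System.out.println(intersectsUnitSphere(0.5));   // cuts the sphere
        System.out.println(intersectsUnitSphere(1.001)); // envelope is off the sphere
    }
}
```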



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427171#comment-16427171
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit 585cf75125f0d3a1db5ab2e0abf13b73ce1bdc71 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=585cf75 ]

LUCENE-8239: Identify the situation where the travel and test point plane 
envelopes are off the ellipsoid and avoid them.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon, you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, so intersection with that plane fails. We should 
> prevent navigating those planes (we should not even construct them).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427169#comment-16427169
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit d1c7240f741daf8a6262591a67442b7f4c026bf3 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d1c7240 ]

LUCENE-8239: Identify the situation where the travel and test point plane 
envelopes are off the ellipsoid and avoid them.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon, you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, so intersection with that plane fails. We should 
> prevent navigating those planes (we should not even construct them).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8239) GeoComplexPolygon fails when test or/and check point are near a pole

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427168#comment-16427168
 ] 

ASF subversion and git services commented on LUCENE-8239:
-

Commit 9b03f8c033e15954f4d9d1a3962cc0695d2d762d in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b03f8c ]

LUCENE-8239: Identify the situation where the travel and test point plane 
envelopes are off the ellipsoid and avoid them.


> GeoComplexPolygon fails when test or/and check point are near a pole
> 
>
> Key: LUCENE-8239
> URL: https://issues.apache.org/jira/browse/LUCENE-8239
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8239.patch
>
>
> When calling the {{within}} method in GeoComplexPolygon, you can get errors if the 
> test point of the polygon or the given point is near a pole.
> The reason is that one of the planes defined by these points is tangent to 
> the world, so intersection with that plane fails. We should 
> prevent navigating those planes (we should not even construct them).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-05 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427167#comment-16427167
 ] 

Mikhail Khludnev commented on SOLR-12155:
-

It makes sense, [~ysee...@gmail.com]. Is {{RuntimeException}} suitable for wrapping 
the causing throwable? See [^SOLR-12155.patch]. 

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Priority: Major
> Attachments: SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.
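The wrapping pattern under discussion can be sketched as follows (an assumed shape for illustration, not the actual patch): rethrow whatever the loader throws as an unchecked RuntimeException carrying the cause, so threads waiting on the cache entry fail fast instead of hanging on a slot that will never be filled.

```java
// Sketch of wrapping a loader's Throwable in a RuntimeException
// (assumed pattern, not the committed SOLR-12155 fix).
import java.util.concurrent.Callable;

public class LoaderWrap {
    static <T> T loadOrWrap(Callable<T> loader) {
        try {
            return loader.call();
        } catch (Exception e) {
            // Propagate the failure unchecked, preserving the cause,
            // so waiters on the cached entry see the error promptly.
            throw new RuntimeException("uninverted field load failed", e);
        }
    }

    public static void main(String[] args) {
        try {
            loadOrWrap(() -> { throw new java.io.IOException("disk error"); });
        } catch (RuntimeException e) {
            System.out.println(e.getCause().getMessage());
        }
    }
}
```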



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: solr

2018-04-05 Thread Shawn Heisey

On 4/5/2018 7:32 AM, Steve Rowe wrote:

You can find all past versions here: http://archive.apache.org/dist/lucene/solr/


Also, the source code for releases back to 3.1.0 is definitely included 
in a checkout from the git repository as tag branches.  So if you do a 
"git clone", you'll have all of that.


https://wiki.apache.org/solr/HowToContribute#Getting_the_source_code

Before that release, Solr was in a separate repository from Lucene, but 
it does look like there might be tags for older releases in the main 
repository.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12155:

Attachment: SOLR-12155.patch

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Priority: Major
> Attachments: SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8839) Angular admin/segments display: display of deleted docs not proportional

2018-04-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427150#comment-16427150
 ] 

Erick Erickson commented on SOLR-8839:
--

This one was annoying me while working on LUCENE-7976 so I've got a fix for it 
in that JIRA.

> Angular admin/segments display: display of deleted docs not proportional
> 
>
> Key: SOLR-8839
> URL: https://issues.apache.org/jira/browse/SOLR-8839
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 5.4.1
>Reporter: Luc Vanlerberghe
>Assignee: Erick Erickson
>Priority: Minor
>
> In the /segments portion of the admin site, the segments are displayed as a 
> bar graph with the size of each bar proportional to the logarithm of the 
> segment size.
> Within each bar the number of deleted documents is shown as a dark-gray 
> portion at the end.
> Before the angular version, the size of this part was directly proportional 
> to the number of deleted documents with respect to the total number of 
> documents in the segment.
> In the angular version, the dark-gray portion is way too large.
> In the previous version, the result was odd as well, since it displayed a 
> proportional percentage within a logarithmic graph.
> I'll add a PR shortly that changes the calculation so the dark-gray part 
> looks approximately proportional to the size the segment would shrink if 
> optimized.
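The idea in the last paragraph can be sketched numerically (the formula below is an assumption for illustration, not the committed fix): on a log-scaled bar, draw the dark-gray deleted portion as the difference between the full bar and the bar the segment would have if optimized.

```java
// Sketch of a log-consistent deleted-docs portion for a segment bar
// (assumed formula, not the actual SOLR-8839 patch).
public class SegmentBar {
    static double deletedFraction(long totalDocs, long deletedDocs) {
        double full = Math.log1p(totalDocs);              // full bar (log scale)
        double live = Math.log1p(totalDocs - deletedDocs); // bar after optimize
        return (full - live) / full; // fraction of the bar drawn dark-gray
    }

    public static void main(String[] args) {
        // 10% deletes barely shrink a log-scaled 1M-doc bar -- far less
        // than a linear 10% slice, which is why the old display looked odd.
        System.out.println(deletedFraction(1_000_000, 100_000));
    }
}
```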



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8839) Angular admin/segments display: display of deleted docs not proportional

2018-04-05 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-8839:


Assignee: Erick Erickson  (was: Upayavira)

> Angular admin/segments display: display of deleted docs not proportional
> 
>
> Key: SOLR-8839
> URL: https://issues.apache.org/jira/browse/SOLR-8839
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 5.4.1
>Reporter: Luc Vanlerberghe
>Assignee: Erick Erickson
>Priority: Minor
>
> In the /segments portion of the admin site, the segments are displayed as a 
> bar graph with the size of each bar proportional to the logarithm of the 
> segment size.
> Within each bar the number of deleted documents is shown as a dark-gray 
> portion at the end.
> Before the angular version, the size of this part was directly proportional 
> to the number of deleted documents with respect to the total number of 
> documents in the segment.
> In the angular version, the dark-gray portion is way too large.
> In the previous version, the result was odd as well, since it displayed a 
> proportional percentage within a logarithmic graph.
> I'll add a PR shortly that changes the calculation so the dark-gray part 
> looks approximately proportional to the size the segment would shrink if 
> optimized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427128#comment-16427128
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit d33ab1a3c0fc1d8e7d5fb8564df6b1003036ee30 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d33ab1a ]

SOLR-12136: highlighting.adoc: Add links and clarify "hl.fl" must refer to 
stored fields.

(cherry picked from commit 8b3fc53)


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1.
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung",
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1:
> These work:
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work:
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy at it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And let's say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both are solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}
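The mismatch described above can be modeled with a toy folding function (not Solr code): f1's analysis chain folds the umlaut at index time, but hl.q parsed against a non-folding default field keeps it, so the highlight terms never line up.

```java
// Toy model of the SOLR-12136 analysis mismatch; foldLikeF1 is a
// stand-in for GermanNormalizationFilterFactory, not Solr's analyzer.
public class FoldingMismatch {
    static String foldLikeF1(String s) {
        return s.replace("\u00fc", "u"); // ü -> u, as f1's chain would
    }

    public static void main(String[] args) {
        String indexedTerm = foldLikeF1("K\u00fcndigung"); // "Kundigung" in the index
        String hlTermViaDf = "K\u00fcndigung";             // df applies no folding
        System.out.println(indexedTerm.equals(hlTermViaDf));             // mismatch -> no highlight
        System.out.println(indexedTerm.equals(foldLikeF1(hlTermViaDf))); // f1 analysis -> match
    }
}
```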



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427125#comment-16427125
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit 8b3fc53e6e75ecc8153ad9a8f25b70169f422c7a in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b3fc53 ]

SOLR-12136: highlighting.adoc: Add links and clarify "hl.fl" must refer to 
stored fields.


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1.
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung",
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1:
> These work:
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work:
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy at it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And let's say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both are solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12190) need to properly escape output in GraphMLResponseWriter

2018-04-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-12190:
---

Assignee: Yonik Seeley

> need to properly escape output in GraphMLResponseWriter
> ---
>
> Key: SOLR-12190
> URL: https://issues.apache.org/jira/browse/SOLR-12190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12190) need to properly escape output in GraphMLResponseWriter

2018-04-05 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-12190:
---

 Summary: need to properly escape output in GraphMLResponseWriter
 Key: SOLR-12190
 URL: https://issues.apache.org/jira/browse/SOLR-12190
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



BadApples this week

2018-04-05 Thread Erick Erickson
I'm not going to add more BadApples this week. The number of errors
I've seen seems to have dropped quite a bit and I'm still gathering
history. Next week for sure.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12036) factor out DefaultStreamFactory class

2018-04-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427093#comment-16427093
 ] 

Joel Bernstein commented on SOLR-12036:
---

Looks good. Now clients don't have to interact with the Lang class directly. 

> factor out DefaultStreamFactory class
> -
>
> Key: SOLR-12036
> URL: https://issues.apache.org/jira/browse/SOLR-12036
> Project: Solr
>  Issue Type: Task
>  Components: streaming expressions
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12036.patch, SOLR-12036.patch
>
>
> Motivation for the proposed class is to reduce the need for 
> {{withFunctionName}} method calls in client code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12188) Inconsistent behavior with CREATE collection API

2018-04-05 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427070#comment-16427070
 ] 

Munendra S N commented on SOLR-12188:
-

[^SOLR-12188.patch]
With this patch, CREATE collection in the Admin UI would mimic the CREATE API 
behavior, i.e., if no configSet is passed, a mutable configSet with the suffix 
*.AUTOCREATED* would be created.

But when _default is specified as the configSet, no new mutable configSet is 
created. In this case, collections would share the _default configSet. 
Should this also be changed?



> Inconsistent behavior with CREATE collection API
> 
>
> Key: SOLR-12188
> URL: https://issues.apache.org/jira/browse/SOLR-12188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, config-api
>Affects Versions: 7.2
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12188.patch
>
>
> If collection.configName is not specified during create collection then 
> _default configSet is used to create mutable configSet (with suffix 
> AUTOCREATED)
> * In the Admin UI, it is mandatory to specify configSet. This behavior is 
> inconsistent with CREATE collection API(where it is not mandatory)
> * Both in Admin UI and CREATE API, when _default is specified as configSet 
> then no mutable configSet is created. So, changes in one collection would 
> reflect in other



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


