[GitHub] [lucene-solr] janhoy merged pull request #2041: Graduate the release wizard from ALPHA

2020-10-29 Thread GitBox


janhoy merged pull request #2041:
URL: https://github.com/apache/lucene-solr/pull/2041


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


janhoy commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514156548



##
File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
##
@@ -81,34 +81,44 @@ $ ./bin/solr-exporter -p 9854 -z localhost:2181/solr -f 
./conf/solr-exporter-con
 
 === Command Line Parameters
 
-The parameters in the example start commands shown above:
+The list of available parameters for the Prometheus Exporter.
+All parameters can be provided via an environment variable, instead of through 
the command line.
 
 `h`, `--help`::
 Displays command line help and usage.
 
-`-p`, `--port`::
-The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus. It can be any port not already in use on your server. The 
default is `9983`.
+`-p`, `--port`, `$PORT`::
+The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus.
+It can be any port not already in use on your server. The default is `9983`.

Review comment:
   I find it puzzling that the default port is the same as the default port for 
embedded ZooKeeper, i.e. if you start `bin/solr -c` and then start the exporter 
on the same host, you'll have a port collision.

##
File path: solr/contrib/prometheus-exporter/bin/solr-exporter
##
@@ -135,15 +134,42 @@ if $cygwin; then
   [ -n "$REPO" ] && REPO=`cygpath --path --windows "$REPO"`
 fi
 
+# Convert Environment Variables to Command Line Options
+EXPORTER_ARGS=()
+
+if [[ -n "$CONFIG_FILE" ]]; then
+  EXPORTER_ARGS+=(-f "$CONFIG_FILE")
+fi
+
+if [[ -n "$PORT" ]]; then

Review comment:
   I like the simplicity of `PORT`, but is there a risk that such generic 
naming could lead to a conflict, i.e. that a user has this env var set for some 
other app, and then the exporter will pick it up?
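The env-to-args pattern under review, together with the namespacing alternative raised in the comment, can be sketched as follows. The `SOLR_EXPORTER_PORT` and `SOLR_EXPORTER_CONFIG` names are hypothetical, used only to illustrate prefixed naming; they are not the variables the PR actually defines.

```shell
#!/usr/bin/env bash
# Sketch: map environment variables to command-line options before
# launching the exporter. Prefixed names (SOLR_EXPORTER_*) avoid
# clashes with generic variables like PORT that other applications
# may already have set.
build_exporter_args() {
  local args=()
  if [[ -n "${SOLR_EXPORTER_PORT}" ]]; then
    args+=(-p "${SOLR_EXPORTER_PORT}")
  fi
  if [[ -n "${SOLR_EXPORTER_CONFIG}" ]]; then
    args+=(-f "${SOLR_EXPORTER_CONFIG}")
  fi
  echo "${args[@]}"
}

SOLR_EXPORTER_PORT=9854 build_exporter_args   # prints: -p 9854
```

Unset variables simply contribute no options, so explicit command-line flags can still be used alongside the environment-driven ones.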








[GitHub] [lucene-solr] jpountz opened a new pull request #2044: Set pinentry-mode to loopback for artifact signing.

2020-10-29 Thread GitBox


jpountz opened a new pull request #2044:
URL: https://github.com/apache/lucene-solr/pull/2044


   On my GPG version, not doing so makes `ant sign-artifacts` fail as the
   default pinentry mode is `ask`, which wants to create a dialog that asks
   for the GPG password.






[jira] [Resolved] (SOLR-14677) DIH doesnt close DataSource when import encounters errors

2020-10-29 Thread Jason Gerlowski (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-14677.

Fix Version/s: 8.7
   master (9.0)
   Resolution: Fixed

Sure, thanks for the reminder.  There are clearly some warts in the process while 
this code is shared across two repos, issue trackers, etc.  Though seemingly 
they won't be long-term concerns.

Closed.

> DIH doesnt close DataSource when import encounters errors
> -
>
> Key: SOLR-14677
> URL: https://issues.apache.org/jira/browse/SOLR-14677
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 7.5, master (9.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Fix For: master (9.0), 8.7
>
> Attachments: error-solr.log, no-error-solr.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DIH imports don't close DataSources (which can hold DB connections, etc.) in 
> all cases.  Specifically, if an import runs into an unexpected error 
> forwarding processed docs to other nodes, it will neglect to close the 
> DataSources when it finishes.
> This problem goes back to at least 7.5.  It is partially mitigated in older 
> versions of some DataSource implementations (e.g. JdbcDataSource) by means of 
> a "finalize" hook which invokes "close()" when the DataSource object is 
> garbage-collected.  In practice, this means that resources might be held open 
> longer than necessary but will be closed within a few seconds or minutes by 
> GC.  This only helps JdbcDataSource though - all other DataSource 
> implementations risk leaking resources.
> In master/9.0, which requires a minimum of Java 11 and doesn't have the 
> finalize hook, the connections are never cleaned up when an error is 
> encountered during DIH.  DIH will likely be removed for the 9.0 release, but 
> if it isn't, this bug should be fixed.
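The fix pattern the issue implies - closing the resource on the error path as well as the success path, instead of relying on a finalize() hook - can be shown with a small standalone sketch. This is illustrative only, not the DIH code; `ImportRunner` and its `DataSource` interface are hypothetical stand-ins.

```java
// Illustrative sketch (not DIH code): a resource opened for an import is
// closed in a finally block, so an error while forwarding processed docs
// does not leak the underlying connection.
public class ImportRunner {
    interface DataSource {
        String fetch();
        void close();
    }

    static boolean closed = false;

    static String runImport(DataSource ds, boolean failForwarding) {
        try {
            String data = ds.fetch();
            if (failForwarding) {
                throw new RuntimeException("error forwarding docs");
            }
            return data;
        } finally {
            ds.close(); // runs on both the success and the error path
        }
    }

    public static void main(String[] args) {
        DataSource ds = new DataSource() {
            public String fetch() { return "rows"; }
            public void close() { closed = true; }
        };
        try {
            runImport(ds, true);
        } catch (RuntimeException expected) {
            // the import failed, but the DataSource was still closed
        }
        System.out.println("closed: " + closed); // prints: closed: true
    }
}
```

With a finalize-based cleanup the close would only happen at some later GC, which is exactly the behavior the issue describes for pre-Java-11 JdbcDataSource.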



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (SOLR-14969) Race condition when creating cores leads to NPE in CoreAdmin STATUS

2020-10-29 Thread Andreas Hubold (Jira)
Andreas Hubold created SOLR-14969:
-

 Summary: Race condition when creating cores leads to NPE in 
CoreAdmin STATUS
 Key: SOLR-14969
 URL: https://issues.apache.org/jira/browse/SOLR-14969
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: multicore
Affects Versions: 8.6.3, 8.6
Reporter: Andreas Hubold


CoreContainer#create does not correctly handle concurrent requests to create 
the same core. There's a race condition (see also the existing TODO comment in 
the code), and CoreContainer#createFromDescriptor may end up being called twice 
for the same core name.

The _second call_ then fails to create an IndexWriter, and exception handling 
causes an inconsistent CoreContainer state.

{noformat}
2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
[blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual machine: 
/var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock

 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
 at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
 at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
...
Caused by: org.apache.solr.common.SolrException: Unable to create core 
[blueprint_acgqqafsogyc_comments]
 at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
 ... 47 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
 at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
 ... 48 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
 at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
 at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
 ... 50 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this 
virtual machine: 
/var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
 at 
org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
 at 
org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
 at 
org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
 at 
org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
 at 
org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
 at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
 at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
 at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
 at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
{noformat}

CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
this exception. The SolrCore created for the first successful call is still 
registered in SolrCores.cores, but now there's no corresponding CoreDescriptor 
for that name anymore.

This inconsistency leads to subsequent NullPointerExceptions, for example when 
using CoreAdmin STATUS with the core name: CoreAdminOperation#getCoreStatus 
first gets the non-null SolrCore (cores.getCore(cname)) but 
core.getInstancePath() throws an NPE, because the CoreDescriptor is not 
registered anymore:

{noformat}
2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={core=blueprint_acgqqafsogyc_comments&action=STATUS&indexInfo=false&wt=javabin&version=2}
 status=500 QTime=0
2020-10-27 00:29:25.353 ERROR (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
null:org.apache.solr.common.SolrException: Error handling 'STATUS' action
 at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:372)
 at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:397)
 at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:181)
...
Caused by: java.lang.NullPointerException
 at org.apache.solr.core.SolrCore.getInstancePath(SolrCore.java:333)
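The race the report describes is between looking up a core and registering a newly created one. A minimal, hypothetical sketch of closing that window with `ConcurrentHashMap.computeIfAbsent` (this is not the actual CoreContainer code, and `Object` stands in for SolrCore):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: ensure only one thread creates a given core name,
// so a second concurrent CREATE never reaches the creation path and never
// hits the index write.lock already held by this JVM.
public class CoreRegistry {
    private final Map<String, Object> cores = new ConcurrentHashMap<>();

    // computeIfAbsent runs the factory at most once per key; concurrent
    // callers for the same name block and then observe the same instance.
    public Object getOrCreate(String name) {
        return cores.computeIfAbsent(name, n -> new Object());
    }

    public static void main(String[] args) throws Exception {
        CoreRegistry reg = new CoreRegistry();
        Object[] seen = new Object[2];
        Thread t1 = new Thread(() -> seen[0] = reg.getOrCreate("core1"));
        Thread t2 = new Thread(() -> seen[1] = reg.getOrCreate("core1"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("same instance: " + (seen[0] == seen[1])); // prints: same instance: true
    }
}
```

Whatever shape the real fix takes, the key property is the same: the create-and-register step for one core name must be atomic with respect to lookups, so error handling for a failed duplicate create can never unregister the descriptor of the successful one.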
 

[jira] [Commented] (SOLR-14969) Race condition when creating cores leads to NPE in CoreAdmin STATUS

2020-10-29 Thread Andreas Hubold (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222908#comment-17222908
 ] 

Andreas Hubold commented on SOLR-14969:
---

I've also asked on solr-user ml: 
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/202010.mbox/%3c92cd6d7a-310b-08d5-f928-c4890fb6e...@coremedia.com%3e

> Race condition when creating cores leads to NPE in CoreAdmin STATUS
> ---
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also the existing TODO comment 
> in the code), and CoreContainer#createFromDescriptor may end up being called 
> twice for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
> {noformat}
> CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
> this exception. The SolrCore created for the first successful call is still 
> registered in SolrCores.cores, but now there's no corresponding 
> CoreDescriptor for that name anymore.
> This inconsistency leads to subsequent NullPointerExceptions, for example 
> when using CoreAdmin STATUS with the core name: 
> CoreAdminOperation#getCoreStatus first gets the non-null SolrCore 
> (cores.getCore(cname)) but core.getInstancePath() throws an NPE, because the 
> CoreDescriptor is not registered anymore:
> {noformat}
> 2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={core=blueprint_acgqqafsogyc_comments&action=STATUS&indexInfo=false&wt=javabin&version=2}
>  status=500 QTime=0
> 2020-1

[GitHub] [lucene-solr] sigram commented on a change in pull request #1962: SOLR-14749 Provide a clean API for cluster-level event processing

2020-10-29 Thread GitBox


sigram commented on a change in pull request #1962:
URL: https://github.com/apache/lucene-solr/pull/1962#discussion_r514282158



##
File path: 
solr/core/src/java/org/apache/solr/core/ClusterEventProducerFactory.java
##
@@ -0,0 +1,178 @@
+package org.apache.solr.core;
+
+import org.apache.solr.api.CustomContainerPlugins;
+import org.apache.solr.cluster.events.ClusterEvent;
+import org.apache.solr.cluster.events.ClusterEventListener;
+import org.apache.solr.cluster.events.ClusterEventProducer;
+import org.apache.solr.cluster.events.impl.DefaultClusterEventProducer;
+
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * This class helps in handling the initial registration of plugin-based listeners,
+ * when both the final {@link ClusterEventProducer} implementation and listeners
+ * are configured using plugins.
+ */
+public class ClusterEventProducerFactory implements ClusterEventProducer {
+  private Map<ClusterEvent.EventType, Set<ClusterEventListener>> initialListeners = new HashMap<>();
+  private CustomContainerPlugins.PluginRegistryListener initialPluginListener;
+  private final CoreContainer cc;
+  private boolean created = false;
+
+  public ClusterEventProducerFactory(CoreContainer cc) {
+    this.cc = cc;
+    initialPluginListener = new CustomContainerPlugins.PluginRegistryListener() {
+      @Override
+      public void added(CustomContainerPlugins.ApiInfo plugin) {
+        if (plugin == null || plugin.getInstance() == null) {
+          return;
+        }
+        Object instance = plugin.getInstance();
+        if (instance instanceof ClusterEventListener) {
+          registerListener((ClusterEventListener) instance);
+        }
+      }
+
+      @Override
+      public void deleted(CustomContainerPlugins.ApiInfo plugin) {
+        if (plugin == null || plugin.getInstance() == null) {
+          return;
+        }
+        Object instance = plugin.getInstance();
+        if (instance instanceof ClusterEventListener) {
+          unregisterListener((ClusterEventListener) instance);
+        }
+      }
+
+      @Override
+      public void modified(CustomContainerPlugins.ApiInfo old, CustomContainerPlugins.ApiInfo replacement) {
+        added(replacement);
+        deleted(old);
+      }
+    };
+  }
+
+  /**
+   * This method returns an initial plugin registry listener that helps to capture the
+   * freshly loaded listener plugins before the final cluster event producer is created.
+   * @return initial listener
+   */
+  public CustomContainerPlugins.PluginRegistryListener getPluginRegistryListener() {
+    return initialPluginListener;
+  }
+
+  /**
+   * Create a {@link ClusterEventProducer} based on the current plugin configurations.
+   * NOTE: this method can only be called once because it has side-effects, such as
+   * transferring the initially collected listeners to the resulting producer's instance, and
+   * installing a {@link org.apache.solr.api.CustomContainerPlugins.PluginRegistryListener}.
+   * Calling this method more than once will result in an exception.
+   * @param plugins current plugin configurations
+   * @return configured instance of cluster event producer (with side-effects, see above)
+   */
+  public ClusterEventProducer create(CustomContainerPlugins plugins) {
+    if (created) {
+      throw new RuntimeException("this factory can be called only once!");
+    }
+    final ClusterEventProducer clusterEventProducer;
+    CustomContainerPlugins.ApiInfo clusterEventProducerInfo = plugins.getPlugin(ClusterEventProducer.PLUGIN_NAME);
+    if (clusterEventProducerInfo != null) {
+      // the listener in ClusterSingletons already registered it
+      clusterEventProducer = (ClusterEventProducer) clusterEventProducerInfo.getInstance();
+    } else {
+      // create the default impl
+      clusterEventProducer = new DefaultClusterEventProducer(cc);

Review comment:
   > There is no reason why we can't implement it. I think we should.
   
   Sure, but it's not a part of this PR. If/when we add it then we can drop 
this initialization of the default impl.








[jira] [Commented] (LUCENE-9588) Exceptions handling in methods of SegmentingTokenizerBase

2020-10-29 Thread Robert Muir (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222913#comment-17222913
 ] 

Robert Muir commented on LUCENE-9588:
-

Sorry, the JapaneseTokenizer example doesn't hold up: that's comparing apples 
with oranges. It doesn't subclass this class: so of course its incrementToken 
throws IOException: it has to read from Reader... its logic mixes that i/o with 
segmentation.

On the other hand, this subclass (the entire point of it!) is to separate these 
two things. If you want to mix i/o and segmentation (like JapaneseTokenizer, 
doing them in a streaming fashion), then this subclass is simply inappropriate 
and you should just subclass {{Tokenizer}}.

I agree that incrementSentence() should not throw IOException, that's a bug. It 
is an oversight and it gives the wrong impression. We can remove the {{throws 
IOException}} there, it doesn't break any subclasses.
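The API question under discussion - what happens when a template-method hook does not declare a checked exception - can be shown with a generic sketch. These are hypothetical classes, not the Lucene `SegmentingTokenizerBase` API; they only mirror its shape (an I/O-free hook invoked by the base class).

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Generic sketch: the hook is intentionally I/O-free (segmentation only),
// so a subclass that insists on doing I/O inside it must wrap the checked
// exception - which is the friction the issue reporter describes.
abstract class SegmentingBase {
    protected abstract boolean incrementWord();

    public final boolean nextToken() {
        return incrementWord();
    }
}

public class WrappingSubclass extends SegmentingBase {
    private int remaining = 2;

    @Override
    protected boolean incrementWord() {
        try {
            return readFromDelegate();
        } catch (IOException e) {
            // forced to smuggle the checked exception out as unchecked
            throw new UncheckedIOException(e);
        }
    }

    // stand-in for another tokenizer's incrementToken(), which may do I/O
    private boolean readFromDelegate() throws IOException {
        return remaining-- > 0;
    }

    public static void main(String[] args) {
        WrappingSubclass t = new WrappingSubclass();
        System.out.println(t.nextToken()); // true
        System.out.println(t.nextToken()); // true
        System.out.println(t.nextToken()); // false
    }
}
```

Muir's position is that this friction is the design working as intended: a subclass that needs streaming I/O inside the hook should subclass Tokenizer directly instead.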


> Exceptions handling in methods of SegmentingTokenizerBase
> -
>
> Key: LUCENE-9588
> URL: https://issues.apache.org/jira/browse/LUCENE-9588
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 8.6.3
>Reporter: Nguyen Minh Gia Huy
>Priority: Minor
>
> The current interfaces of the *setNextSentence* and *incrementWord* methods in 
> *SegmentingTokenizerBase* do not declare checked exceptions, which makes them 
> troublesome to override.
> For example, if we override incrementWord with logic that invokes 
> incrementToken on another tokenizer, that incrementToken raises IOException, 
> but incrementWord is not declared to handle it.
> I think having setNextSentence and incrementWord handle IOException would 
> make SegmentingTokenizerBase easier to use.






[jira] [Commented] (SOLR-14963) Child "rows" param should apply per level

2020-10-29 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222922#comment-17222922
 ] 

David Smiley commented on SOLR-14963:
-

I don't really have a strong opinion on this. [~arafalov] reported the problem 
on the list.  The documentation is inconsistent with the limit.  It sounds more 
straightforward _to me_ for the limit to be as documented, thus applied to all 
parents.  This could still theoretically mean an unlimited number of docs might 
get returned if there are tons of ancestry depth... but in practice that seems 
highly unlikely.   Or we just change the docs and call it done ;-)

I also tend to find this limit a really annoying gotcha that will leave you 
puzzled.  I would _prefer_ that we default to no limit, and then give people 
controls to rein this in if it's found to be a problem.  WDYT guys?

> Child "rows" param should apply per level
> -
>
> Key: SOLR-14963
> URL: https://issues.apache.org/jira/browse/SOLR-14963
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>
> The {{[child rows=10]}} doc transformer "rows" param _should_ apply per 
> parent, and it's documented this way: "The maximum number of child documents 
> to be returned per parent document.".  However, it is instead implemented as 
> an overall limit as the child documents are processed in a depth-first order 
> way.  The implementation ought to change.
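The two readings of the `rows` param can be contrasted with a small standalone sketch (illustrative only, not the Solr transformer code): `perParent` is the documented per-parent reading, while `overall` is the implemented shared budget consumed in depth-first order.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Solr code): contrast the documented per-parent
// "rows" limit with the implemented overall limit applied depth-first.
public class ChildRowsLimit {
    static final class Doc {
        final String id;
        final List<Doc> children;
        Doc(String id, List<Doc> children) { this.id = id; this.children = children; }
    }

    // Documented behavior: each parent returns at most 'rows' children.
    static int perParent(List<Doc> parents, int rows) {
        int total = 0;
        for (Doc p : parents) {
            total += Math.min(rows, p.children.size());
        }
        return total;
    }

    // Implemented behavior: one shared budget across all parents.
    static int overall(List<Doc> parents, int rows) {
        int total = 0;
        for (Doc p : parents) {
            for (Doc c : p.children) {
                if (total == rows) return total;
                total++;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<Doc> parents = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            List<Doc> kids = new ArrayList<>();
            for (int j = 0; j < 5; j++) kids.add(new Doc(i + "." + j, List.of()));
            parents.add(new Doc("p" + i, kids));
        }
        // 3 parents x 5 children, rows=10: the documented reading returns
        // 5 per parent (15 total); the shared budget stops at 10, starving
        // later parents entirely.
        System.out.println(perParent(parents, 10)); // prints: 15
        System.out.println(overall(parents, 10));   // prints: 10
    }
}
```

The starvation of later parents is what makes the current behavior a surprising gotcha: the first parents in document order get their full complement of children while the last ones may get none.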






[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-10-29 Thread Alex Klibisz (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222935#comment-17222935
 ] 

Alex Klibisz commented on LUCENE-9378:
--

Are there any suggested workarounds for this issue at the moment?

The performance hit has basically made it impossible for me to upgrade past 8.4.

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
> Attachments: hotspots-v76x.png, hotspots-v76x.png, hotspots-v76x.png, 
> hotspots-v76x.png, hotspots-v76x.png, hotspots-v77x.png, hotspots-v77x.png, 
> hotspots-v77x.png, hotspots-v77x.png, image-2020-06-12-22-17-30-339.png, 
> image-2020-06-12-22-17-53-961.png, image-2020-06-12-22-18-24-527.png, 
> image-2020-06-12-22-18-48-919.png, snapshot-v77x.nps, snapshot-v77x.nps, 
> snapshot-v77x.nps, snapshots-v76x.nps, snapshots-v76x.nps, snapshots-v76x.nps
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene80DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]






[jira] [Commented] (SOLR-11921) cursorMark with elevateIds throws Exception

2020-10-29 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222941#comment-17222941
 ] 

Bernd Wahlen commented on SOLR-11921:
-

I also ran into this bug on 8.6.3.
All the docs are in the result of the first page, and then instead of the next 
cursorMark field:
"error": {
"msg": "class java.lang.Integer cannot be cast to class 
org.apache.lucene.util.BytesRef (java.lang.Integer is in module java.base of 
loader 'bootstrap'; org.apache.lucene.util.BytesRef is in unnamed module of 
loader org.eclipse.jetty.webapp.WebAppClassLoader @329a1243)",
"trace": "java.lang.ClassCastException: class java.lang.Integer cannot be cast 
to class org.apache.lucene.util.BytesRef (java.lang.Integer is in module 
java.base of loader 'bootstrap'; org.apache.lucene.util.BytesRef is in unnamed 
module of loader org.eclipse.jetty.webapp.WebAppClassLoader @329a1243)\n\tat 
org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:1296)\n\tat
 org.apache.solr.schema.StrField.marshalSortValue(StrField.java:117)\n\tat 
org.apache.solr.search.CursorMark.getSerializedTotem(CursorMark.java:251)\n\tat 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1526)\n\tat
 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:403)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:331)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)\n\tat 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:815)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)\n\tat
 
org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:500)\n\tat 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)\n\tat
 org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)\n\tat
 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375

[jira] [Commented] (SOLR-14923) Indexing performance is unacceptable when child documents are involved

2020-10-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222944#comment-17222944
 ] 

Thomas Wöckinger commented on SOLR-14923:
-

Three test are failing if flowing code (line 503 to 505) is commented out in 
DistributedUpdateProcessor.java
{code:java}

  if(req.getSchema().isUsableForChildDocs() && 
shouldRefreshUlogCaches(cmd)) {
ulog.openRealtimeSearcher();
  }

{code}
[junit4] - org.apache.solr.cloud.NestedShardedAtomicUpdateTest.test

[junit4] - 
org.apache.solr.update.processor.NestedAtomicUpdateTest.testBlockAtomicAdd

[junit4] - 
org.apache.solr.update.processor.NestedAtomicUpdateTest.testBlockAtomicSet

A closer look at these unit tests shows that they use the real-time searcher to 
validate the results.

So does anyone have time to take a closer look at this issue?

> Indexing performance is unacceptable when child documents are involved
> --
>
> Key: SOLR-14923
> URL: https://issues.apache.org/jira/browse/SOLR-14923
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4, 8.5, 8.6
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: performance
>
> Parallel indexing does not make sense at the moment when child documents are used.
> The org.apache.solr.update.processor.DistributedUpdateProcessor checks at the 
> end of the method doVersionAdd if Ulog caches should be refreshed.
> This check will return true if any child document is included in the 
> AddUpdateCommand.
> If so, ulog.openRealtimeSearcher() is called. This call is very expensive, 
> and it is executed in a synchronized block of the UpdateLog instance, so all 
> other operations on the UpdateLog are blocked too.
> Because every important UpdateLog method (add, delete, ...) runs in a 
> synchronized block, almost every operation is blocked.
> This reduces multi threaded index update to a single thread behavior.
> The described behavior is not depending on any option of the UpdateRequest, 
> so it does not make any difference if 'waitFlush', 'waitSearcher' or 
> 'softCommit'  is true or false.
> The described behavior makes the usage of ChildDocuments useless, because the 
> performance is unacceptable.
>  
>  
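The serialization effect the issue describes (every operation contending on one shared monitor) can be demonstrated with a self-contained toy; the class and lock names below are illustrative stand-ins, not Solr's actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: every "update" synchronizes on one shared monitor, so the
// maximum number of threads ever observed inside the critical section is 1,
// regardless of how many worker threads submit updates concurrently.
class SingleMonitorDemo {
    static final Object ulogLock = new Object();      // stands in for the UpdateLog instance
    static final AtomicInteger inside = new AtomicInteger();
    static final AtomicInteger maxInside = new AtomicInteger();

    static void add() {
        synchronized (ulogLock) {
            int now = inside.incrementAndGet();
            maxInside.accumulateAndGet(now, Math::max); // record peak concurrency
            try {
                Thread.sleep(2);                        // simulate expensive work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            inside.decrementAndGet();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 32; i++) {
            pool.submit(SingleMonitorDemo::add);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("peak threads inside add(): " + maxInside.get()); // always 1
    }
}
```

Even with 8 worker threads, at most one thread is ever inside add() at a time, which is exactly the single-threaded behavior the report describes.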



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (LUCENE-9591) StringIndexOutOfBoundsException in FastVectorHighlighter

2020-10-29 Thread Markus Jelsma (Jira)
Markus Jelsma created LUCENE-9591:
-

 Summary: StringIndexOutOfBoundsException in FastVectorHighlighter
 Key: LUCENE-9591
 URL: https://issues.apache.org/jira/browse/LUCENE-9591
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 8.6
Reporter: Markus Jelsma


Just spotted this one. Solr throws the exception but the request is returned 
anyway. One highlighted snippet has garbled contents. 
{code:java}
2020-10-29 15:06:30.448 ERROR (qtp2119992687-25889) [c:collection_20200511 s:shard4 r:core_node48 x:sitesearch_20200511_shard4_replica_t47] o.a.s.s.HttpSolrCall null:java.lang.StringIndexOutOfBoundsException: start 568858, end 539731, length 539732
    at java.base/java.lang.AbstractStringBuilder.checkRangeSIOOBE(AbstractStringBuilder.java:1724)
    at java.base/java.lang.AbstractStringBuilder.substring(AbstractStringBuilder.java:1017)
    at java.base/java.lang.StringBuilder.substring(StringBuilder.java:85)
    at org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFragmentSourceMSO(BaseFragmentsBuilder.java:204)
    at org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(BaseFragmentsBuilder.java:175)
    at org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.createFragments(BaseFragmentsBuilder.java:144)
    at org.apache.lucene.search.vectorhighlight.FastVectorHighlighter.getBestFragments(FastVectorHighlighter.java:186)
    at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter(DefaultSolrHighlighter.java:581)
    at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingOfField(DefaultSolrHighlighter.java:531)
    at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:478)
    at org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:172)

 {code}






[jira] [Created] (SOLR-14970) elevation does not work without elevate.xml config

2020-10-29 Thread Bernd Wahlen (Jira)
Bernd Wahlen created SOLR-14970:
---

 Summary: elevation does not work without elevate.xml config
 Key: SOLR-14970
 URL: https://issues.apache.org/jira/browse/SOLR-14970
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.6.3
Reporter: Bernd Wahlen


When I remove the elevate.xml line from solrconfig.xml, plus the file itself, 
elevation stops working and no error is logged. We put the ids directly in the 
query and we are not using the default fields or ids, so the xml is completely 
useless, but it is required for the elevation component to work. Example query:
http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true

  

<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
  <str name="editorialMarkerFieldName">elevated</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>








[jira] [Updated] (SOLR-14970) elevation does not work without elevate.xml config

2020-10-29 Thread Bernd Wahlen (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Wahlen updated SOLR-14970:

Description: 
When I remove the elevate.xml line from solrconfig.xml, plus the file itself, 
elevation stops working and no error is logged. We put the ids directly in the 
query and we are not using the default fields or ids, so the xml is completely 
useless, but it is required for the elevation component to work. Example query:

{code:java}http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true{code}


{code:xml}
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
  <str name="editorialMarkerFieldName">elevated</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
{code}


  was:
When i remove elevate.xml line from solrconfig.xml plus the file,
elevation is not working. No error is logged. We put the ids directly in the 
query and we are not using the default fields or ids, so the xml is completely 
useless, but required to let the elevation component working, example query:
http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true

  

<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
  <str name="editorialMarkerFieldName">elevated</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>




> elevation does not work without elevate.xml config
> -
>
> Key: SOLR-14970
> URL: https://issues.apache.org/jira/browse/SOLR-14970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Bernd Wahlen
>Priority: Minor
>
> When i remove elevate.xml line from solrconfig.xml plus the file,
> elevation is not working. No error is logged. We put the ids directly in the 
> query and we are not using the default fields or ids, so the xml is 
> completely useless, but required to let the elevation component working, 
> example query:
> {code:java}http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true{code}
> {code:xml}
> <searchComponent name="elevator" class="solr.QueryElevationComponent">
>   <str name="queryFieldType">string</str>
>   <str name="config-file">elevate.xml</str>
>   <str name="editorialMarkerFieldName">elevated</str>
> </searchComponent>
>
> <requestHandler name="/elevate" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="echoParams">explicit</str>
>   </lst>
>   <arr name="last-components">
>     <str>elevator</str>
>   </arr>
> </requestHandler>
> {code}






[jira] [Updated] (SOLR-14970) elevation does not work without elevate.xml config

2020-10-29 Thread Bernd Wahlen (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Wahlen updated SOLR-14970:

Description: 
When I remove the elevate.xml line from solrconfig.xml, plus the file itself, 
elevation stops working and no error is logged.
We put the ids directly in the query and we are not using the default fields or 
ids, so the xml is completely useless, but it is required for the elevation 
component to work. Example query:

{code:java}http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true{code}


{code:xml}
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
  <str name="editorialMarkerFieldName">elevated</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
{code}


  was:
When i remove elevate.xml line from solrconfig.xml plus the file,
elevation is not working. No error is logged. We put the ids directly in the 
query and we are not using the default fields or ids, so the xml is completely 
useless, but required to let the elevation component working, example query:

{code:java}http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true{code}


{code:xml}
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
  <str name="editorialMarkerFieldName">elevated</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
{code}



> elevation does not work without elevate.xml config
> -
>
> Key: SOLR-14970
> URL: https://issues.apache.org/jira/browse/SOLR-14970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Bernd Wahlen
>Priority: Minor
>
> When i remove elevate.xml line from solrconfig.xml plus the file, elevation 
> is not working and no error is logged.
> We put the ids directly in the query and we are not using the default fields 
> or ids, so the xml is completely useless, but required to let the elevation 
> component work, example query:
> {code:java}http://staging.qeep.net:8983/solr/profile_v2/elevate?q=%2Bapp_sns%3A%20qeep&sort=random_4239%20desc,%20id%20desc&elevateIds=361018,361343&forceElevation=true{code}
> {code:xml}
> <searchComponent name="elevator" class="solr.QueryElevationComponent">
>   <str name="queryFieldType">string</str>
>   <str name="config-file">elevate.xml</str>
>   <str name="editorialMarkerFieldName">elevated</str>
> </searchComponent>
>
> <requestHandler name="/elevate" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="echoParams">explicit</str>
>   </lst>
>   <arr name="last-components">
>     <str>elevator</str>
>   </arr>
> </requestHandler>
> {code}






[jira] [Commented] (SOLR-14969) Race condition when creating cores leads to NPE in CoreAdmin STATUS

2020-10-29 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222947#comment-17222947
 ] 

Erick Erickson commented on SOLR-14969:
---

First, thanks for raising this. While we might be able to get around this in 
the CoreAdmin STATUS command (I haven't looked), the fact that Solr needs to 
be reloaded is scary enough that the root cause should be addressed.

This'll be somewhat tricky to fix. Much of the complexity here is because of 
"transient" and "lazy" cores. On a quick look at the change, I don't see 
anything obvious that would have changed the behavior of the create code, but 
perhaps it exposed an underlying issue.

*background*

There are two use-cases:

1> An installation has, say, 1,000 cores but only 100 of them are in use at any 
given time. Given enough disk space, hardware costs can be reduced by a factor 
of 10 if they can have cores load and unload dynamically on an LRU basis. This 
is the "transient" case.

2> An installation has 1,000 cores that can all be loaded at once, but startup 
time is prohibitive if Solr waits for all 1,000 cores to be loaded.  The "lazy" 
case is if they can afford to take the hit for loading when the first request 
is made to a core.
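The eviction policy in the transient case (1>) can be sketched with a plain LinkedHashMap in access order; this is only an illustration of LRU unloading, not Solr's actual TransientSolrCoreCache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: with accessOrder=true, iteration order is
// least-recently-used first, and removeEldestEntry evicts once the
// configured capacity is exceeded (a real cache would unload the core here).
class LruCoreCache<V> extends LinkedHashMap<String, V> {
    private final int capacity;

    LruCoreCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCoreCache<String> cache = new LruCoreCache<>(2);
        cache.put("coreA", "a");
        cache.put("coreB", "b");
        cache.get("coreA");      // touch coreA, so coreB is now the eldest
        cache.put("coreC", "c"); // capacity exceeded: coreB is evicted
        System.out.println(cache.keySet()); // [coreA, coreC]
    }
}
```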

So at startup, all the CoreDescriptors are read through "core discovery", but 
the cores may or may not be loaded. Which means that there's lots of 
synchronization code and a bewildering variety of lists (pendingCoreOps, 
currentlyLoadingCores, similar lists in the TransientSolrCoreCache). 
currentlyLoadingCores is tempting, but it's for async core loading.

All the above is to emphasize that this code is gnarly for some non-obvious 
reasons, tread with care ;).

*On to this problem*

CoreContainer.create checks at the top for pre-existing cores and throws an 
error if it finds one, which is fine and good for cores that already exist, 
because all the CoreDescriptors are read at CoreContainer initialization. 

However, when creating a new core, the new core descriptor is invisible to any 
other thread that comes in here and does the above check until sometime during 
core creation, which is where this problem arises. The check at the top won't 
"see" the core being created until it's added to some other list in SolrCores 
eventually.

I suppose we could add in something like the following: Add a new member 
variable.
{code:java}
 List<String> inFlightCores = new ArrayList<>();
{code}
then wrap the entirety of CoreContainer.create in a try/finally block
{code:java}
try {
  synchronized (inFlightCores) {
    if (inFlightCores.contains(newCoreName)) {
      throw new SolrException(ErrorCode.SERVER_ERROR,
          "Core '" + newCoreName + "' is already being created");
    }
    inFlightCores.add(newCoreName);
  }
  // ... rest of CoreContainer.create code ...
} finally {
  synchronized (inFlightCores) {
    inFlightCores.remove(newCoreName);
  }
}
{code}
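For comparison, the same check-and-insert guard can be made a single atomic step with a concurrent set, avoiding the explicit synchronized blocks; this is an illustrative sketch, not actual Solr code:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an in-flight-name guard using a concurrent set: Set.add returns
// false when the name is already present, so the duplicate check and the
// registration happen as one atomic step.
class InFlightGuard {
    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

    void create(String coreName, Runnable doCreate) {
        if (!inFlight.add(coreName)) {
            throw new IllegalStateException("Core '" + coreName + "' is already being created");
        }
        try {
            doCreate.run(); // stands in for the body of CoreContainer.create
        } finally {
            inFlight.remove(coreName); // always release, even on failure
        }
    }
}
```

A second request for the same name fails fast while the first is still in flight, and the name is released in the finally block whether creation succeeds or throws.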
A significant amount of the complexity there is a result of lessons learned 
when a client made extensive use of transient cores that were created and 
destroyed, but obviously not two at once with the same name.

I'm not excited about adding an ad-hoc fix like above, but I can say with 
certainty that a more intrusive change would be difficult to get right. Pending 
a more extensive revisiting of all the core admin operations, maybe something 
like this would be the best way to go...

I'd intended just to write this up, but now that I've thought about it I'll see 
if I can work up a fix like above. Could you test a fix if I come up with one? 
I should be able to write a test though...

> Race condition when creating cores leads to NPE in CoreAdmin STATUS
> ---
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOp

[jira] [Commented] (SOLR-14963) Child "rows" param should apply per level

2020-10-29 Thread Bar Rotstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222958#comment-17222958
 ] 

Bar Rotstein commented on SOLR-14963:
-

+1 to no limit by default.

I think in the dev forum the proposal was to limit the immediate children:
bq. So, is that (all nested children included in limit) what we actually mean? 
Or did we mean maximum number of "immediate children" for any specific 
document/level and the code is wrong?

IMO, this makes a little more sense than the current logic.
WDYT?
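The two readings of the limit can be contrasted with a toy depth-first traversal; Doc and the method names below are hypothetical, not the actual transformer code:

```java
import java.util.ArrayList;
import java.util.List;

// Contrast of the two limit interpretations on a toy document tree
// (Doc is a hypothetical stand-in, not the transformer's data structure).
class ChildLimitDemo {
    record Doc(String id, List<Doc> children) {}

    // Current behavior: one global budget consumed depth-first.
    static void collectGlobal(Doc doc, int rows, List<String> out) {
        for (Doc child : doc.children()) {
            if (out.size() >= rows) return;
            out.add(child.id());
            collectGlobal(child, rows, out);
        }
    }

    // Documented behavior: up to 'rows' immediate children per parent.
    static void collectPerParent(Doc doc, int rows, List<String> out) {
        int taken = 0;
        for (Doc child : doc.children()) {
            if (taken++ >= rows) break;
            out.add(child.id());
            collectPerParent(child, rows, out);
        }
    }

    public static void main(String[] args) {
        Doc a1 = new Doc("a1", List.of(new Doc("c1", List.of()), new Doc("c2", List.of())));
        Doc root = new Doc("root", List.of(a1, new Doc("a2", List.of())));

        List<String> global = new ArrayList<>();
        collectGlobal(root, 2, global);       // [a1, c1]: the budget is spent inside a1's subtree
        List<String> perParent = new ArrayList<>();
        collectPerParent(root, 2, perParent); // [a1, c1, c2, a2]: each parent gets its own limit
        System.out.println(global + " vs " + perParent);
    }
}
```

With a global budget, a later sibling (a2) can be cut off entirely by an earlier sibling's descendants; with a per-parent limit it cannot.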

> Child "rows" param should apply per level
> -
>
> Key: SOLR-14963
> URL: https://issues.apache.org/jira/browse/SOLR-14963
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>
> The {{[child rows=10]}} doc transformer "rows" param _should_ apply per 
> parent, and it's documented this way: "The maximum number of child documents 
> to be returned per parent document.".  However, it is instead implemented as 
> an overall limit as the child documents are processed in a depth-first order 
> way.  The implementation ought to change.






[jira] [Comment Edited] (SOLR-14963) Child "rows" param should apply per level

2020-10-29 Thread Bar Rotstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222958#comment-17222958
 ] 

Bar Rotstein edited comment on SOLR-14963 at 10/29/20, 3:26 PM:


+1 to no limit by default.

I think in the dev forum the proposal was to limit the immediate children:
{quote}So, is that (all nested children included in limit) what we actually 
mean? Or did we mean maximum number of "immediate children" for any specific 
document/level and the code is wrong?
{quote}
IMO, this makes a little more sense than the current logic.
 WDYT [~arafalov] ?


was (Author: brot):
+1 to no limit by default.

I think in the dev forum the proposal was to limit the immediate children:
bq. So, is that (all nested children included in limit) what we actually mean? 
Or did we mean maximum number of "immediate children" for any specific 
document/level and the code is wrong?`

IMO, this makes a little more sense than the current logic.
WDYT?

> Child "rows" param should apply per level
> -
>
> Key: SOLR-14963
> URL: https://issues.apache.org/jira/browse/SOLR-14963
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>
> The {{[child rows=10]}} doc transformer "rows" param _should_ apply per 
> parent, and it's documented this way: "The maximum number of child documents 
> to be returned per parent document.".  However, it is instead implemented as 
> an overall limit as the child documents are processed in a depth-first order 
> way.  The implementation ought to change.






[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-10-29 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222962#comment-17222962
 ] 

Michael Gibney commented on LUCENE-9378:


{quote}I prefer that BinaryDocValues be simple/fast as it was by default with a 
configurable option to add some compression.
{quote}
+1 for an option to bypass compression, however that's accomplished. To note 
what might be obvious: this change hasn't even fully hit yet in user-land, 
having been introduced in 8.5. Also maybe worth emphasizing that for 
bulk-export cases that access values for all docs in arbitrary order (e.g., 
Solr export handler), iiuc this would result in each block being independently 
decompressed _for each doc in that block_. Not sure whether there are analogous 
tools in elasticsearch (or other) ...
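That repeated-decompression cost can be modeled with a toy counter that retains only the most recently decompressed block; the block size and access patterns here are illustrative, not the actual codec's behavior:

```java
// Toy model: reading a doc "decompresses" its whole block unless that block
// was the one most recently decompressed. Doc-at-a-time access in arbitrary
// order can therefore decompress a block once per doc it contains.
class BlockAccessDemo {
    static int countDecompressions(int[] docOrder, int blockSize) {
        int decompressions = 0;
        int cachedBlock = -1; // only the last decompressed block is retained
        for (int doc : docOrder) {
            int block = doc / blockSize;
            if (block != cachedBlock) {
                decompressions++;
                cachedBlock = block;
            }
        }
        return decompressions;
    }

    public static void main(String[] args) {
        int[] sequential  = {0, 1, 2, 3, 4, 5, 6, 7};
        int[] alternating = {0, 4, 1, 5, 2, 6, 3, 7}; // ping-pongs between two blocks
        System.out.println(countDecompressions(sequential, 4));  // 2: once per block
        System.out.println(countDecompressions(alternating, 4)); // 8: once per doc
    }
}
```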

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
> Attachments: hotspots-v76x.png, hotspots-v76x.png, hotspots-v76x.png, 
> hotspots-v76x.png, hotspots-v76x.png, hotspots-v77x.png, hotspots-v77x.png, 
> hotspots-v77x.png, hotspots-v77x.png, image-2020-06-12-22-17-30-339.png, 
> image-2020-06-12-22-17-53-961.png, image-2020-06-12-22-18-24-527.png, 
> image-2020-06-12-22-18-48-919.png, snapshot-v77x.nps, snapshot-v77x.nps, 
> snapshot-v77x.nps, snapshots-v76x.nps, snapshots-v76x.nps, snapshots-v76x.nps
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene80DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]






[jira] [Commented] (SOLR-14969) Race condition when creating cores leads to NPE in CoreAdmin STATUS

2020-10-29 Thread Andreas Hubold (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222974#comment-17222974
 ] 

Andreas Hubold commented on SOLR-14969:
---

Thank you! I'm not too familiar with all this code, but your suggestion sounds 
reasonable. I just thought about a similar fix in a custom CoreAdminHandler, 
but I still have to check if that's customizable.

I don't have a stable reproducer yet, but I can still try to test a proposed 
fix. However, I will be unavailable next week.

> Race condition when creating cores leads to NPE in CoreAdmin STATUS
> ---
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
> {noformat}
> CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
> this exception. The SolrCore created for the first successful call is still 
> registered in SolrCores.cores, but now there's no corresponding 
> CoreDescriptor for that name anymore.
> This inconsistency leads to subsequent NullPointerExceptions, for example 
> when using CoreAdmin STATUS with the core name: 
> CoreAdminOperation#getCoreStatus first gets the non-null SolrCore 
> (cores.getCore(cname)) but core.getInstancePath() throws an NPE, because the 
> CoreDescriptor is not registered anymore:
> {noformat}
> 2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall

[GitHub] [lucene-solr] epugh opened a new pull request #2045: Solr 14968

2020-10-29 Thread GitBox


epugh opened a new pull request #2045:
URL: https://github.com/apache/lucene-solr/pull/2045


   Small JavaDoc improvement.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-10-29 Thread Nico Tonozzi (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223016#comment-17223016
 ] 

Nico Tonozzi commented on LUCENE-9378:
--

[~alexklibisz] The workaround we were forced into was to set our codec to a 
filter codec and use the {{Lucene70DocValuesFormat}}.

I'd add another +1 for the option to disable compression. This is going to 
prevent us from upgrading to a Lucene version that drops support for 
{{Lucene70DocValuesFormat}}.

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
> Attachments: hotspots-v76x.png, hotspots-v76x.png, hotspots-v76x.png, 
> hotspots-v76x.png, hotspots-v76x.png, hotspots-v77x.png, hotspots-v77x.png, 
> hotspots-v77x.png, hotspots-v77x.png, image-2020-06-12-22-17-30-339.png, 
> image-2020-06-12-22-17-53-961.png, image-2020-06-12-22-18-24-527.png, 
> image-2020-06-12-22-18-48-919.png, snapshot-v77x.nps, snapshot-v77x.nps, 
> snapshot-v77x.nps, snapshots-v76x.nps, snapshots-v76x.nps, snapshots-v76x.nps
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene80DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]






[jira] [Commented] (SOLR-14968) Document how to return ExternalFileField value in fl params

2020-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223017#comment-17223017
 ] 

ASF subversion and git services commented on SOLR-14968:


Commit d0ba0f38a9d0fbfae6da872f5f4117a1c31ecfef in lucene-solr's branch 
refs/heads/master from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d0ba0f3 ]

Enhance Javadocs for ExternalFileField on how to return values as part of 
document fields. SOLR-14968

* add helpful documentation on returning the field value

* wordsmith

> Document how to return ExternalFileField value in fl params
> ---
>
> Key: SOLR-14968
> URL: https://issues.apache.org/jira/browse/SOLR-14968
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: David Eric Pugh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Turns out you can return an ExternalFileField in the `fl=` params through 
> some magic.  Document this in Javadocs.






[GitHub] [lucene-solr] epugh merged pull request #2045: SOLR-14968

2020-10-29 Thread GitBox


epugh merged pull request #2045:
URL: https://github.com/apache/lucene-solr/pull/2045


   






[jira] [Commented] (SOLR-14968) Document how to return ExternalFileField value in fl params

2020-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223019#comment-17223019
 ] 

ASF subversion and git services commented on SOLR-14968:


Commit 4af2ddaf4e197f256cf523bd6f37c8ff87800d2b in lucene-solr's branch 
refs/heads/branch_8x from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4af2dda ]

Enhance Javadocs for ExternalFileField on how to return values as part of 
document fields. SOLR-14968

* add helpful documentation on returning the field value

* wordsmith

> Document how to return ExternalFileField value in fl params
> ---
>
> Key: SOLR-14968
> URL: https://issues.apache.org/jira/browse/SOLR-14968
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: David Eric Pugh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Turns out you can return an ExternalFileField value in the `fl=` params through 
> some magic.  Document this in Javadocs.






[jira] [Resolved] (SOLR-14968) Document how to return ExternalFileField value in fl params

2020-10-29 Thread David Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Eric Pugh resolved SOLR-14968.

Fix Version/s: 8.8
   master (9.0)
 Assignee: David Eric Pugh
   Resolution: Fixed

> Document how to return ExternalFileField value in fl params
> ---
>
> Key: SOLR-14968
> URL: https://issues.apache.org/jira/browse/SOLR-14968
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: David Eric Pugh
>Assignee: David Eric Pugh
>Priority: Major
> Fix For: master (9.0), 8.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Turns out you can return an ExternalFileField value in the `fl=` params through 
> some magic.  Document this in Javadocs.






[jira] [Commented] (SOLR-14963) Child "rows" param should apply per level

2020-10-29 Thread Alexandre Rafalovitch (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223091#comment-17223091
 ] 

Alexandre Rafalovitch commented on SOLR-14963:
--

When I dug into the original discussion, my feeling was that it centered around 
a shallow tree: a parent document with a bunch of children. So the use cases 
with grandparents and deeper nesting were not necessarily fleshed out. That, 
combined with the breadth-first iteration internally (IIRC), made the current 
setup super confusing.

I think the limit should be per parent/child relationship (per level could 
still mean shared between X parents at level 1). So, if we imagine each parent 
at every level having 10 children (10^depth total nodes) and we said rows=3, we 
would get 3^depth returned. Currently, I think, if we said limit=25, we would 
get 10 top-level nodes, 10 children of the first node, and 5 children of the 
second node, with nothing for the other 8 and no indication of truncation. 
That is basically what I hit originally.

I am ok with default being unlimited. Then, if they are hit with super-deep and 
wide trees, they can use this parameter explicitly.
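The per-parent arithmetic above can be sketched as a toy model (pure counting, not Solr code; the fanout and rows values are illustrative):

```python
def per_parent_count(depth, fanout, rows):
    """Docs returned if 'rows' caps children per parent at every level."""
    if depth == 0:
        return 0
    k = min(rows, fanout)              # each parent keeps at most 'rows' kids
    return k * (1 + per_parent_count(depth - 1, fanout, rows))

# 10 children per parent, rows=3, two levels deep: 3 + 3*3 = 12 docs.
print(per_parent_count(2, 10, 3))
```

Under the global-limit interpretation the same tree would instead stop abruptly once the overall budget is exhausted, regardless of which parents still have unreturned children.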

> Child "rows" param should apply per level
> -
>
> Key: SOLR-14963
> URL: https://issues.apache.org/jira/browse/SOLR-14963
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>
> The {{[child rows=10]}} doc transformer "rows" param _should_ apply per 
> parent, and it's documented this way: "The maximum number of child documents 
> to be returned per parent document.".  However, it is instead implemented as 
> an overall limit as the child documents are processed in a depth-first order 
> way.  The implementation ought to change.






[jira] [Created] (SOLR-14971) AtomicUpdate 'remove' fails on 'pints' fields

2020-10-29 Thread Jason Gerlowski (Jira)
Jason Gerlowski created SOLR-14971:
--

 Summary: AtomicUpdate 'remove' fails on 'pints' fields
 Key: SOLR-14971
 URL: https://issues.apache.org/jira/browse/SOLR-14971
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: update
Affects Versions: 8.5.2
Reporter: Jason Gerlowski
 Attachments: reproduce.sh

The "remove" atomic update action on multivalued int fields fails if the 
document being changed is uncommitted.

At first glance this appears to be a type-related issue.  
AtomicUpdateDocumentMerger attempts to handle multivalued int fields by 
processing one boxed List element type, but in uncommitted docs the int fields 
read back from the tlog carry a different boxed element type.  Conceptually 
this feels similar to SOLR-13331.

It's likely this issue also affects other numeric and date fields.

Attached is a simple script to reproduce, meant to be run from the root of a 
Solr install.
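The List-element type mismatch can be illustrated outside Java. Here is a hedged Python sketch that mimics Java boxed-number equality (where a Long never equals an Integer); the class and the values are illustrative and model the concept, not Solr's actual code:

```python
class Boxed:
    """Imitates a Java boxed number: equality requires the same box type."""
    def __init__(self, kind, value):
        self.kind, self.value = kind, value

    def __eq__(self, other):
        return (isinstance(other, Boxed)
                and self.kind == other.kind
                and self.value == other.value)

# Values of an uncommitted doc as read back from the tlog (one box type)...
tlog_values = [Boxed("Long", 5), Boxed("Long", 7)]
# ...versus the 'remove' payload parsed as another box type.
to_remove = Boxed("Integer", 5)

# The remove silently matches nothing: no element equals Integer(5).
remaining = [v for v in tlog_values if v != to_remove]
print(len(remaining))
```

In the real merger the fix would need to normalize both sides to a common numeric type before comparing.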






[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-10-29 Thread Alex Klibisz (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223132#comment-17223132
 ] 

Alex Klibisz commented on LUCENE-9378:
--

> The workaround we were forced to use was to set our codec to a filter codec 
> and use the {{Lucene70DocValuesFormat}}.

Thanks, that gives me an idea. I think there might be a way to specify an 
older codec for a specific field in my Elasticsearch plugin. I'll look into 
this. 

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
> Attachments: hotspots-v76x.png, hotspots-v76x.png, hotspots-v76x.png, 
> hotspots-v76x.png, hotspots-v76x.png, hotspots-v77x.png, hotspots-v77x.png, 
> hotspots-v77x.png, hotspots-v77x.png, image-2020-06-12-22-17-30-339.png, 
> image-2020-06-12-22-17-53-961.png, image-2020-06-12-22-18-24-527.png, 
> image-2020-06-12-22-18-48-919.png, snapshot-v77x.nps, snapshot-v77x.nps, 
> snapshot-v77x.nps, snapshots-v76x.nps, snapshots-v76x.nps, snapshots-v76x.nps
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene80DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]






[jira] [Commented] (LUCENE-9583) How should we expose VectorValues.RandomAccess?

2020-10-29 Thread Jim Ferenczi (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223181#comment-17223181
 ] 

Jim Ferenczi commented on LUCENE-9583:
--

Ok, so we need dense ordinals because you expect lots of documents without a 
value? What's the intent? Reducing the size needed to encode the doc ids in 
the graph? That seems premature to me, so I was wondering why we require two 
interfaces here. I don't understand why we have to keep two implementations: 
if random access is the way to go then we don't need the forward iterator, but 
I agree with Julie that it may send the wrong message, which is why I proposed 
adding the reset method.

> How should we expose VectorValues.RandomAccess?
> ---
>
> Key: LUCENE-9583
> URL: https://issues.apache.org/jira/browse/LUCENE-9583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Sokolov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the newly-added {{VectorValues}} API, we have a {{RandomAccess}} 
> sub-interface. [~jtibshirani] pointed out this is not needed by some 
> vector-indexing strategies which can operate solely using a forward-iterator 
> (it is needed by HNSW), and so in the interest of simplifying the public API 
> we should not expose this internal detail (which by the way surfaces internal 
> ordinals that are somewhat uninteresting outside the random access API).
> I looked into how to move this inside the HNSW-specific code and remembered 
> that we do also currently make use of the RA API when merging vector fields 
> over sorted indexes. Without it, we would need to load all vectors into RAM  
> while flushing/merging, as we currently do in 
> {{BinaryDocValuesWriter.BinaryDVs}}. I wonder if it's worth paying this cost 
> for the simpler API.
> Another thing I noticed while reviewing this is that I moved the KNN 
> {{search(float[] target, int topK, int fanout)}} method from {{VectorValues}} 
>  to {{VectorValues.RandomAccess}}. This I think we could move back, and 
> handle the HNSW requirements for search elsewhere. I wonder if that would 
> alleviate the major concern here? 






[GitHub] [lucene-solr] HoustonPutman commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


HoustonPutman commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514543448



##
File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
##
@@ -81,34 +81,44 @@ $ ./bin/solr-exporter -p 9854 -z localhost:2181/solr -f 
./conf/solr-exporter-con
 
 === Command Line Parameters
 
-The parameters in the example start commands shown above:
+The list of available parameters for the Prometheus Exporter.
+All parameters can be provided via an environment variable, instead of through 
the command line.
 
 `h`, `--help`::
 Displays command line help and usage.
 
-`-p`, `--port`::
-The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus. It can be any port not already in use on your server. The 
default is `9983`.
+`-p`, `--port`, `$PORT`::
+The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus.
+It can be any port not already in use on your server. The default is `9983`.

Review comment:
   That is very fair, and something we could certainly change in a major 
version, such as 9.0.
   
   It should probably be done separately, since these changes are 
backwards-compatible. 








[GitHub] [lucene-solr] HoustonPutman commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


HoustonPutman commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514545969



##
File path: solr/contrib/prometheus-exporter/bin/solr-exporter
##
@@ -135,15 +134,42 @@ if $cygwin; then
   [ -n "$REPO" ] && REPO=`cygpath --path --windows "$REPO"`
 fi
 
+# Convert Environment Variables to Command Line Options
+EXPORTER_ARGS=()
+
+if [[ -n "$CONFIG_FILE" ]]; then
+  EXPORTER_ARGS+=(-f "$CONFIG_FILE")
+fi
+
+if [[ -n "$PORT" ]]; then

Review comment:
   Yeah, that could certainly happen. `EXPORTER_PORT` would work perfectly 
fine. I went the direction decided in 
https://github.com/apache/lucene-solr/pull/887 on variable naming. I don't have 
a strong opinion either way.








[jira] [Updated] (LUCENE-9542) Score returned in search request is original score and not reranked score

2020-10-29 Thread Jason Baik (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Baik updated LUCENE-9542:
---
Attachment: 0001-LUCENE-9542-Unit-test-to-reproduce-bug.patch

> Score returned in search request is original score and not reranked score
> -
>
> Key: LUCENE-9542
> URL: https://issues.apache.org/jira/browse/LUCENE-9542
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0
>Reporter: Krishan
>Priority: Major
> Attachments: 0001-LUCENE-9542-Unit-test-to-reproduce-bug.patch
>
>
> Score returned in search request is original score and not reranked score 
> post the changes in https://issues.apache.org/jira/browse/LUCENE-8412.
> Commit - 
> [https://github.com/apache/lucene-solr/commit/55bfadbce115a825a75686fe0bfe71406bc3ee44#diff-4e354f104ed52bd7f620b0c05ae8467d]
> Specifically - 
> if (cmd.getSort() != null && query instanceof RankQuery == false && 
> (cmd.getFlags() & GET_SCORES) != 0) {
>     TopFieldCollector.populateScores(topDocs.scoreDocs, this, query);
> }
> in SolrIndexSearcher.java recomputes the score but outputs only the original 
> score and not the reranked score.
>  
> The issue is cmd.getQuery() is a type of RankQuery but the "query" variable 
> is a boolean query and probably replacing query with cmd.getQuery() should be 
> the right fix for this so that the score is not overriden for rerank queries
>  






[GitHub] [lucene-solr] janhoy commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


janhoy commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514559651



##
File path: solr/contrib/prometheus-exporter/bin/solr-exporter
##
@@ -135,15 +134,42 @@ if $cygwin; then
   [ -n "$REPO" ] && REPO=`cygpath --path --windows "$REPO"`
 fi
 
+# Convert Environment Variables to Command Line Options
+EXPORTER_ARGS=()
+
+if [[ -n "$CONFIG_FILE" ]]; then
+  EXPORTER_ARGS+=(-f "$CONFIG_FILE")
+fi
+
+if [[ -n "$PORT" ]]; then

Review comment:
   For a docker env it's not a problem, and for most servers you'll have a 
separate service user with clean shell. Developers could unintentionally have 
the PORT var set on their laptop, but I don't think we should make design 
decisions from that.
   
   On a side note, I tested what happens if you set both the env var `$PORT` 
and the arg `-p`: the last arg wins, thus you can override any env var with an 
explicit command arg, which I guess is a good thing.
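The precedence observed here falls out naturally when env-derived flags are prepended before the user's explicit flags, since most option parsers keep the last value for a repeated flag. A small Python sketch; the parser and flag wiring are an illustrative analogue of the shell script, not the actual exporter:

```python
import argparse

def build_argv(env, cli_args):
    """Prepend env-derived flags so explicit CLI flags, coming last, win."""
    argv = []
    if env.get("PORT"):
        argv += ["-p", env["PORT"]]
    if env.get("CONFIG_FILE"):
        argv += ["-f", env["CONFIG_FILE"]]
    return argv + cli_args

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--port")
parser.add_argument("-f", "--config-file")

# Env says 9983, the command line says 8989: the explicit arg wins.
opts = parser.parse_args(build_argv({"PORT": "9983"}, ["-p", "8989"]))
print(opts.port)
```

The same last-one-wins behavior holds for commons-cli style parsers used on the Java side, which is why env-before-args ordering gives the desired override semantics.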








[jira] [Commented] (SOLR-14837) prometheus-exporter: different metrics ports publishes mixed metrics

2020-10-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223202#comment-17223202
 ] 

Jan Høydahl commented on SOLR-14837:


Please supply a pull request with a fix, then we'll QA and merge the fix.

> prometheus-exporter: different metrics ports publishes mixed metrics
> 
>
> Key: SOLR-14837
> URL: https://issues.apache.org/jira/browse/SOLR-14837
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 8.6.2
>Reporter: Fadi Mohsen
>Priority: Minor
>
When calling SolrExporter.main programmatically (in the same JVM) with two 
different Solr masters asked to publish their metrics on two different ports, 
the metrics from the two Solr masters are mixed on both metric endpoints.
> This was tracked down to a static variable called *defaultRegistry*:
> https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/SolrExporter.java#L86
> removing the static keyword fixes the issue.






[jira] [Updated] (SOLR-14837) prometheus-exporter: different metrics ports publishes mixed metrics

2020-10-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-14837:
---
Component/s: contrib - prometheus-exporter

> prometheus-exporter: different metrics ports publishes mixed metrics
> 
>
> Key: SOLR-14837
> URL: https://issues.apache.org/jira/browse/SOLR-14837
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - prometheus-exporter
>Affects Versions: 8.6.2
>Reporter: Fadi Mohsen
>Priority: Minor
>
> When calling SolrExporter.main programmatically (in the same JVM) with two 
> different Solr masters asked to publish their metrics on two different ports, 
> the metrics from the two Solr masters are mixed on both metric endpoints.
> This was tracked down to a static variable called *defaultRegistry*:
> https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/SolrExporter.java#L86
> removing the static keyword fixes the issue.






[jira] [Created] (SOLR-14972) Change default port of prometheus exporter

2020-10-29 Thread Jira
Jan Høydahl created SOLR-14972:
--

 Summary: Change default port of prometheus exporter
 Key: SOLR-14972
 URL: https://issues.apache.org/jira/browse/SOLR-14972
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - prometheus-exporter
Reporter: Jan Høydahl


The default port of prometheus exporter is 9983, which is exactly the same port 
as the embedded Zookeeper (8983+1000), which would prevent someone from testing 
both on their laptop with default settings.

Use e.g. 8989 instead






[GitHub] [lucene-solr] janhoy commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


janhoy commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514563497



##
File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
##
@@ -81,34 +81,44 @@ $ ./bin/solr-exporter -p 9854 -z localhost:2181/solr -f 
./conf/solr-exporter-con
 
 === Command Line Parameters
 
-The parameters in the example start commands shown above:
+The list of available parameters for the Prometheus Exporter.
+All parameters can be provided via an environment variable, instead of through 
the command line.
 
 `h`, `--help`::
 Displays command line help and usage.
 
-`-p`, `--port`::
-The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus. It can be any port not already in use on your server. The 
default is `9983`.
+`-p`, `--port`, `$PORT`::
+The port where Prometheus will listen for new data. This port will be used to 
configure Prometheus.
+It can be any port not already in use on your server. The default is `9983`.

Review comment:
   https://issues.apache.org/jira/browse/SOLR-14972








[jira] [Commented] (LUCENE-9542) Score returned in search request is original score and not reranked score

2020-10-29 Thread Jason Baik (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223207#comment-17223207
 ] 

Jason Baik commented on LUCENE-9542:


[^0001-LUCENE-9542-Unit-test-to-reproduce-bug.patch] contains a unit test that 
reproduces the bug.

We're experiencing the same problem after migrating from Solr 6 to Solr 8. 
[~krishan1390]'s suggestion of changing query -> cmd.getQuery() works, but for 
the longer term it might be better to add an API that indicates whether the 
top collector has already computed the scores. That way, the lazy score 
computation, which LUCENE-8412 introduced after the top doc collection stage, 
can more reliably decide whether it should compute and overwrite the scores.

In other words, instead of this:
{code:java}
TopDocs topDocs = topCollector.topDocs(0, len);
if (cmd.getSort() != null && query instanceof RankQuery == false && 
(cmd.getFlags() & GET_SCORES) != 0) {
  TopFieldCollector.populateScores(topDocs.scoreDocs, this, query);
} {code}
Do something like this:
{code:java}
TopDocs topDocs = topCollector.topDocs(0, len);
if (cmd.getSort() != null && !topCollector.hasAlreadyComputedScores() && 
(cmd.getFlags() & GET_SCORES) != 0) {
  TopFieldCollector.populateScores(topDocs.scoreDocs, this, query);
}
 {code}
Although a counter-argument to this generic solution could be that the problem 
is currently limited to ReRankQuery, which is unique in that its 
score-modifying behavior is implemented in the top doc collector layer rather 
than in the Weight/Scorer layer, since it wants to modify scores for only a 
subset of the matched docs.
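The proposed guard can be modeled in a few lines. A Python sketch, where the `already_computed` flag is a hypothetical stand-in for the suggested hasAlreadyComputedScores() API, and the doc ids and scores are made up:

```python
original_scores = {1: 0.5, 2: 0.4}      # scores from the base query
reranked_top_docs = {1: 9.0, 2: 0.4}    # the collector reranked doc 1

def populate_scores(top_docs, already_computed):
    """Mimics the post-collection populateScores step with the proposed guard."""
    if already_computed:
        return dict(top_docs)            # leave the reranked scores alone
    # Current behavior: recompute from the original query, clobbering reranks.
    return {doc: original_scores[doc] for doc in top_docs}

buggy = populate_scores(reranked_top_docs, already_computed=False)
fixed = populate_scores(reranked_top_docs, already_computed=True)
print(buggy[1], fixed[1])
```

With the guard in place, recomputation only happens when the collector has not already produced final scores, which is exactly the distinction the rerank case needs.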

 

> Score returned in search request is original score and not reranked score
> -
>
> Key: LUCENE-9542
> URL: https://issues.apache.org/jira/browse/LUCENE-9542
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0
>Reporter: Krishan
>Priority: Major
> Attachments: 0001-LUCENE-9542-Unit-test-to-reproduce-bug.patch
>
>
> Score returned in search request is original score and not reranked score 
> post the changes in https://issues.apache.org/jira/browse/LUCENE-8412.
> Commit - 
> [https://github.com/apache/lucene-solr/commit/55bfadbce115a825a75686fe0bfe71406bc3ee44#diff-4e354f104ed52bd7f620b0c05ae8467d]
> Specifically - 
> if (cmd.getSort() != null && query instanceof RankQuery == false && 
> (cmd.getFlags() & GET_SCORES) != 0) {
>     TopFieldCollector.populateScores(topDocs.scoreDocs, this, query);
> }
> in SolrIndexSearcher.java recomputes the score but outputs only the original 
> score and not the reranked score.
>  
> The issue is cmd.getQuery() is a type of RankQuery but the "query" variable 
> is a boolean query and probably replacing query with cmd.getQuery() should be 
> the right fix for this so that the score is not overriden for rerank queries
>  






[jira] [Assigned] (SOLR-14972) Change default port of prometheus exporter

2020-10-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-14972:
--

Assignee: Jan Høydahl

> Change default port of prometheus exporter
> --
>
> Key: SOLR-14972
> URL: https://issues.apache.org/jira/browse/SOLR-14972
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - prometheus-exporter
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default port of prometheus exporter is 9983, which is exactly the same 
> port as the embedded Zookeeper (8983+1000), which would prevent someone from 
> testing both on their laptop with default settings.
> Use e.g. 8989 instead






[GitHub] [lucene-solr] janhoy opened a new pull request #2046: SOLR-14972: Change default port of prometheus exporter to 8989

2020-10-29 Thread GitBox


janhoy opened a new pull request #2046:
URL: https://github.com/apache/lucene-solr/pull/2046


   See https://issues.apache.org/jira/browse/SOLR-14972






[jira] [Commented] (SOLR-14925) CVE-2020-13957: The checks added to unauthenticated configset uploads can be circumvented

2020-10-29 Thread Rakesh Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223220#comment-17223220
 ] 

Rakesh Sharma commented on SOLR-14925:
--

[~tflobbe]

 
{code:java}
 The checks in place to prevent such features can be circumvented by using a 
combination of UPLOAD/CREATE actions.
{code}
Tomas, can you please outline how it can be circumvented using a combination of 
UPLOAD/CREATE actions? Thank you.

> CVE-2020-13957: The checks added to unauthenticated configset uploads can be 
> circumvented
> -
>
> Key: SOLR-14925
> URL: https://issues.apache.org/jira/browse/SOLR-14925
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 7.0, 
> 7.0.1, 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 
> 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.6, 8.5.1, 8.5.2, 
> 8.6.1, 8.6.2
>Reporter: Tomas Eduardo Fernandez Lobbe
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Fix For: master (9.0), 8.7, 8.6.3
>
>
> Severity: High
> Vendor: The Apache Software Foundation
> Versions Affected:
> 6.6.0 to 6.6.5
> 7.0.0 to 7.7.3
> 8.0.0 to 8.6.2
> Description:
> Solr prevents some features considered dangerous (which could be used for 
> remote code execution) to be configured in a ConfigSet that's uploaded via 
> API without authentication/authorization. The checks in place to prevent such 
> features can be circumvented by using a combination of UPLOAD/CREATE actions.
> Mitigation:
> Any of the following are enough to prevent this vulnerability:
> * Disable UPLOAD command in ConfigSets API if not used by setting the system 
> property: {{configset.upload.enabled}} to {{false}} [1]
> * Use Authentication/Authorization and make sure unknown requests aren't 
> allowed [2]
> * Upgrade to Solr 8.6.3 or greater.
> * If upgrading is not an option, consider applying the patch in SOLR-14663 
> ([3])
> * No Solr API, including the Admin UI, is designed to be exposed to 
> non-trusted parties. Tune your firewall so that only trusted computers and 
> people are allowed access
> Credit:
> Tomás Fernández Löbbe, András Salamon
> References:
> [1] https://lucene.apache.org/solr/guide/8_6/configsets-api.html
> [2] 
> https://lucene.apache.org/solr/guide/8_6/authentication-and-authorization-plugins.html
> [3] https://issues.apache.org/jira/browse/SOLR-14663
> [4] https://issues.apache.org/jira/browse/SOLR-14925
> [5] https://wiki.apache.org/solr/SolrSecurity






[GitHub] [lucene-solr] HoustonPutman commented on a change in pull request #2038: SOLR-14955: Add env var options to Prometheus Export scripts.

2020-10-29 Thread GitBox


HoustonPutman commented on a change in pull request #2038:
URL: https://github.com/apache/lucene-solr/pull/2038#discussion_r514591539



##
File path: solr/contrib/prometheus-exporter/bin/solr-exporter
##
@@ -135,15 +134,42 @@ if $cygwin; then
   [ -n "$REPO" ] && REPO=`cygpath --path --windows "$REPO"`
 fi
 
+# Convert Environment Variables to Command Line Options
+EXPORTER_ARGS=()
+
+if [[ -n "$CONFIG_FILE" ]]; then
+  EXPORTER_ARGS+=(-f "$CONFIG_FILE")
+fi
+
+if [[ -n "$PORT" ]]; then

Review comment:
   That is good to know. In that case we should probably stick with 
`$PORT`, as there's an easy fix in the case where someone is affected by it.








[GitHub] [lucene-solr] janhoy commented on pull request #2046: SOLR-14972: Change default port of prometheus exporter to 8989

2020-10-29 Thread GitBox


janhoy commented on pull request #2046:
URL: https://github.com/apache/lucene-solr/pull/2046#issuecomment-719048906


   > Looks good to me. Would probably be good to have a mention of it in the 
upgrade notes.
   
   I think that is covered by the `major-changes-in-solr-9.adoc` edit, as the 
`solr-upgrade-notes.adoc` says:
   
   // DEVS: please put 9.0 Upgrade Notes in `major-changes-in-solr-9.adoc`!.






[GitHub] [lucene-solr] HoustonPutman commented on pull request #2046: SOLR-14972: Change default port of prometheus exporter to 8989

2020-10-29 Thread GitBox


HoustonPutman commented on pull request #2046:
URL: https://github.com/apache/lucene-solr/pull/2046#issuecomment-719053217


   So sorry, I think I glanced over it. You should be good to go!






[GitHub] [lucene-solr] HoustonPutman edited a comment on pull request #2046: SOLR-14972: Change default port of prometheus exporter to 8989

2020-10-29 Thread GitBox


HoustonPutman edited a comment on pull request #2046:
URL: https://github.com/apache/lucene-solr/pull/2046#issuecomment-719053217


   So sorry, I think I glanced over that edit. You should be good to go!






[jira] [Commented] (SOLR-14972) Change default port of prometheus exporter

2020-10-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223247#comment-17223247
 ] 

Jan Høydahl commented on SOLR-14972:


I intend to commit this on Monday (9.0 only)

> Change default port of prometheus exporter
> --
>
> Key: SOLR-14972
> URL: https://issues.apache.org/jira/browse/SOLR-14972
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - prometheus-exporter
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The default port of prometheus exporter is 9983, which is exactly the same 
> port as the embedded Zookeeper (8983+1000), which would prevent someone from 
> testing both on their laptop with default settings.
> Use e.g. 8989 instead



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (LUCENE-9592) TestVectorUtil can fail with assertion error

2020-10-29 Thread Julie Tibshirani (Jira)
Julie Tibshirani created LUCENE-9592:


 Summary: TestVectorUtil can fail with assertion error
 Key: LUCENE-9592
 URL: https://issues.apache.org/jira/browse/LUCENE-9592
 Project: Lucene - Core
  Issue Type: Test
Reporter: Julie Tibshirani


Example failure:
{code:java}
java.lang.AssertionError: expected:<35.699527740478516> but was:<35.69953918457031>
  at __randomizedtesting.SeedInfo.seed([305701410F76FAD0:4797D77886281D68]:0)
  at org.junit.Assert.fail(Assert.java:89)
  at org.junit.Assert.failNotEquals(Assert.java:835)
  at org.junit.Assert.assertEquals(Assert.java:555)
  at org.junit.Assert.assertEquals(Assert.java:685)
  at org.apache.lucene.util.TestVectorUtil.testSelfDotProduct(TestVectorUtil.java:28)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:567)
{code}
Reproduce line: 
{code:java}
gradlew test --tests TestVectorUtil.testSelfDotProduct 
-Dtests.seed=305701410F76FAD0 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-AE -Dtests.timezone=SystemV/MST7 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8 {code}

Perhaps the vector utility methods should work with doubles instead of floats 
to avoid loss of precision.
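The float-vs-double drift behind a failure like this can be reproduced with a toy accumulation loop. This is an illustrative sketch only; the class and method names are made up and this is not the actual VectorUtil code:

```java
// Illustrative only: accumulate the same dot product in float and in
// double to show the low-order-bit drift seen in the test failure.
import java.util.Random;

public class DotPrecision {

    // Accumulates in float: each += rounds to float precision.
    static float dotFloat(float[] a, float[] b) {
        float res = 0f;
        for (int i = 0; i < a.length; i++) {
            res += a[i] * b[i];
        }
        return res;
    }

    // Accumulates in double: rounding error stays far below a float ulp.
    static double dotDouble(float[] a, float[] b) {
        double res = 0d;
        for (int i = 0; i < a.length; i++) {
            res += (double) a[i] * b[i];
        }
        return res;
    }

    public static void main(String[] args) {
        float[] v = new float[10_000];
        Random rnd = new Random(42);
        for (int i = 0; i < v.length; i++) {
            v[i] = rnd.nextFloat();
        }
        // The two sums typically disagree in the last few decimal digits.
        System.out.println(dotFloat(v, v) + " vs " + dotDouble(v, v));
    }
}
```

Widening the accumulator does not make the result exact, but it removes the per-step float rounding, which is usually what makes equality-style test assertions flaky.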






[GitHub] [lucene-solr] jtibshirani opened a new pull request #2047: LUCENE-9592: Use doubles in VectorUtil to maintain precision.

2020-10-29 Thread GitBox


jtibshirani opened a new pull request #2047:
URL: https://github.com/apache/lucene-solr/pull/2047


   Currently we use floats throughout the vector computations like dot product.
   This PR switches to doubles to avoid a loss of precision.






[GitHub] [lucene-solr] jtibshirani commented on a change in pull request #2047: LUCENE-9592: Use doubles in VectorUtil to maintain precision.

2020-10-29 Thread GitBox


jtibshirani commented on a change in pull request #2047:
URL: https://github.com/apache/lucene-solr/pull/2047#discussion_r514636862



##
File path: lucene/core/src/java/org/apache/lucene/util/VectorUtil.java
##
@@ -25,47 +25,22 @@
   private VectorUtil() {
   }
 
-  public static float dotProduct(float[] a, float[] b) {
-float res = 0f;
-/*
- * If length of vector is larger than 8, we use unrolled dot product to 
accelerate the
- * calculation.
- */
-int i;
-for (i = 0; i < a.length % 8; i++) {
-  res += b[i] * a[i];
-}
-if (a.length < 8) {
-  return res;
-}
-float s0 = 0f;
-float s1 = 0f;
-float s2 = 0f;
-float s3 = 0f;
-float s4 = 0f;
-float s5 = 0f;
-float s6 = 0f;
-float s7 = 0f;
-for (; i + 7 < a.length; i += 8) {
-  s0 += b[i] * a[i];
-  s1 += b[i + 1] * a[i + 1];
-  s2 += b[i + 2] * a[i + 2];
-  s3 += b[i + 3] * a[i + 3];
-  s4 += b[i + 4] * a[i + 4];
-  s5 += b[i + 5] * a[i + 5];
-  s6 += b[i + 6] * a[i + 6];
-  s7 += b[i + 7] * a[i + 7];
+  public static double dotProduct(float[] a, float[] b) {

Review comment:
   I thought it could make sense to keep a simple approach for now, before 
we have a vector search implementation and benchmarking setup (in the spirit 
of not optimizing too early)? There are a few options we may want to try, like 
decoding and taking the dot product at the same time.








[jira] [Commented] (SOLR-14969) Race condition when creating cores leads to NPE in CoreAdmin STATUS

2020-10-29 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223288#comment-17223288
 ] 

Erick Erickson commented on SOLR-14969:
---

It's worse than I thought. Not the fix, just how weird this could get.

I started by writing a test that spawns two threads, each of which creates a 
core with the same name. I was going to try to send the admin status command 
(not particularly easy, actually, when you don't have a SolrClient readily 
available). A ways into this a lightbulb went off: "hey! How come I'm 
successfully creating two cores with the same name in the first place?"

I can reliably get two cores _successfully_ created with the same name, and 
both create calls return the same object. What the _actual_ state of Solr is 
at that point I don't know, but it can't be right ;). Since both calls return 
the same object, if I try to close the cores returned by create, the second 
close fails because the core has already been closed.

If I start one thread, wait 500ms (longer than it needs to be) then start the 
second thread, I get the expected error on the second thread. This is with the 
current code.

I'll have a PR shortly; I'll run the full suite after the PR is up so 
night owls have a chance to look at it. I'll push this tomorrow sometime if 
tests pass.
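The fix direction described above (track the names of in-flight creations and reject a duplicate) can be pictured as a tiny standalone guard. The names and structure here are illustrative, not the actual CoreContainer code:

```java
// Minimal in-flight-name guard: the first beginCreate("x") wins, and a
// concurrent second beginCreate("x") is rejected until endCreate("x") runs.
import java.util.HashSet;
import java.util.Set;

public class InFlightGuard {
    private final Set<String> inFlight = new HashSet<>();

    // Returns false if a create for this name is already in progress.
    public boolean beginCreate(String name) {
        synchronized (inFlight) {
            return inFlight.add(name);
        }
    }

    // Should run in a finally block so a failed create releases the name.
    public void endCreate(String name) {
        synchronized (inFlight) {
            inFlight.remove(name);
        }
    }
}
```

A caller would wrap the expensive creation as: reject when beginCreate returns false, then do the work inside try/finally with endCreate in the finally block, so the name is released even when creation throws.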

> Race condition when creating cores leads to NPE in CoreAdmin STATUS
> ---
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrC

[jira] [Updated] (SOLR-14969) Race condition when creating cores with the same name

2020-10-29 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14969:
--
Summary: Race condition when creating cores with the same name  (was: Race 
condition when creating cores leads to NPE in CoreAdmin STATUS)

> Race condition when creating cores with the same name
> -
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
> {noformat}
> CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
> this exception. The SolrCore created for the first successful call is still 
> registered in SolrCores.cores, but now there's no corresponding 
> CoreDescriptor for that name anymore.
> This inconsistency leads to subsequent NullPointerExceptions, for example 
> when using CoreAdmin STATUS with the core name: 
> CoreAdminOperation#getCoreStatus first gets the non-null SolrCore 
> (cores.getCore(cname)) but core.getInstancePath() throws an NPE, because the 
> CoreDescriptor is not registered anymore:
> {noformat}
> 2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={core=blueprint_acgqqafsogyc_comments&action=STATUS&indexInfo=false&wt=javabin&version=2}
>  status=500 QTime=0
> 2020-10-27 00:29:25.353 ERROR (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
> null:org.apache.solr.common.Sol

[jira] [Updated] (SOLR-14969) Prevent creating multiple cores with the same name which leads to instabilities (race condition)

2020-10-29 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14969:
--
Summary: Prevent creating multiple cores with the same name which leads to 
instabilities (race condition)  (was: Race condition when creating cores with 
the same name)

> Prevent creating multiple cores with the same name which leads to 
> instabilities (race condition)
> 
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Priority: Major
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
> {noformat}
> CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
> this exception. The SolrCore created for the first successful call is still 
> registered in SolrCores.cores, but now there's no corresponding 
> CoreDescriptor for that name anymore.
> This inconsistency leads to subsequent NullPointerExceptions, for example 
> when using CoreAdmin STATUS with the core name: 
> CoreAdminOperation#getCoreStatus first gets the non-null SolrCore 
> (cores.getCore(cname)) but core.getInstancePath() throws an NPE, because the 
> CoreDescriptor is not registered anymore:
> {noformat}
> 2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={core=blueprint_acgqqafsogyc_comments&action=STATUS&indexInfo=false&wt=javabin&version=2}
>  status=500 Q

[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-10-29 Thread Alex Klibisz (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223292#comment-17223292
 ] 

Alex Klibisz commented on LUCENE-9378:
--

Using the `Lucene70DocValuesFormat` gets me back to the pre 8.5.x speed. 
(Actually a little faster!)

I'm wondering if there is anything I should know about whether (and why) it 
might be a bad idea to continue using the `Lucene70DocValuesFormat` until this 
issue is resolved?

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
> Attachments: hotspots-v76x.png, hotspots-v76x.png, hotspots-v76x.png, 
> hotspots-v76x.png, hotspots-v76x.png, hotspots-v77x.png, hotspots-v77x.png, 
> hotspots-v77x.png, hotspots-v77x.png, image-2020-06-12-22-17-30-339.png, 
> image-2020-06-12-22-17-53-961.png, image-2020-06-12-22-18-24-527.png, 
> image-2020-06-12-22-18-48-919.png, snapshot-v77x.nps, snapshot-v77x.nps, 
> snapshot-v77x.nps, snapshots-v76x.nps, snapshots-v76x.nps, snapshots-v76x.nps
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene80DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]






[GitHub] [lucene-solr] ErickErickson opened a new pull request #2048: SOLR-14969: Prevent creating multiple cores with the same name which …

2020-10-29 Thread GitBox


ErickErickson opened a new pull request #2048:
URL: https://github.com/apache/lucene-solr/pull/2048


   …leads to instabilities (race condition)
   
   Running checks now. What do people think about the approach? Tests are 
running; barring objections I'll push tomorrow sometime (Friday)...






[GitHub] [lucene-solr] muse-dev[bot] commented on a change in pull request #2048: SOLR-14969: Prevent creating multiple cores with the same name which …

2020-10-29 Thread GitBox


muse-dev[bot] commented on a change in pull request #2048:
URL: https://github.com/apache/lucene-solr/pull/2048#discussion_r514729998



##
File path: solr/core/src/java/org/apache/solr/core/CoreContainer.java
##
@@ -1252,78 +1252,92 @@ public SolrCore create(String coreName, Map<String, String> parameters) {
* @param parameters   the core parameters
* @return the newly created core
*/
+  List<String> inFlightCreations = new ArrayList<>(); // See SOLR-14969
   public SolrCore create(String coreName, Path instancePath, Map<String, String> parameters, boolean newCollection) {
-
-CoreDescriptor cd = new CoreDescriptor(coreName, instancePath, parameters, 
getContainerProperties(), getZkController());
-
-// TODO: There's a race here, isn't there?
-// Since the core descriptor is removed when a core is unloaded, it should 
never be anywhere when a core is created.
-if (getAllCoreNames().contains(coreName)) {
-  log.warn("Creating a core with existing name is not allowed");
-  // TODO: Shouldn't this be a BAD_REQUEST?
-  throw new SolrException(ErrorCode.SERVER_ERROR, "Core with name '" + 
coreName + "' already exists.");
-}
-
-// Validate paths are relative to known locations to avoid path traversal
-assertPathAllowed(cd.getInstanceDir());
-assertPathAllowed(Paths.get(cd.getDataDir()));
-
-boolean preExisitingZkEntry = false;
 try {
-  if (getZkController() != null) {
-if (cd.getCloudDescriptor().getCoreNodeName() == null) {
-  throw new SolrException(ErrorCode.SERVER_ERROR, "coreNodeName 
missing " + parameters.toString());
+  synchronized (inFlightCreations) {
+if (inFlightCreations.contains(coreName)) {
+  String msg = "Already creating a core with name '" + coreName + "', 
call aborted '";
+  log.warn(msg);
+  throw new SolrException(ErrorCode.SERVER_ERROR, msg);
 }
-preExisitingZkEntry = 
getZkController().checkIfCoreNodeNameAlreadyExists(cd);
+inFlightCreations.add(coreName);
+  }
+  CoreDescriptor cd = new CoreDescriptor(coreName, instancePath, 
parameters, getContainerProperties(), getZkController());

Review comment:
   *THREAD_SAFETY_VIOLATION:*  Read/Write race. Non-private method 
`CoreContainer.create(...)` indirectly reads without synchronization from 
`this.zkSys.zkController`. Potentially races with write in method 
`CoreContainer.load()`.
Reporting because this access may occur on a background thread.








[jira] [Commented] (LUCENE-9585) Make preserving original token in CompoundWordTokenFilterBase configurable

2020-10-29 Thread Geoffrey Lawson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223362#comment-17223362
 ] 

Geoffrey Lawson commented on LUCENE-9585:
-

Looking at LUCENE-5620, there is already a discussion about how to handle token 
filters where the user may want to preserve the original token or not. 
Following the preserve/restore pattern described there has issues with 
CompoundWordTokenFilterBase: this filter already preserves the original token, 
so we would first need to change that behavior, and no longer preserving the 
original token would be a large change in behavior for existing users.

> Make preserving original token in CompoundWordTokenFilterBase configurable
> --
>
> Key: LUCENE-9585
> URL: https://issues.apache.org/jira/browse/LUCENE-9585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 8.5.1
>Reporter: Geoffrey Lawson
>Priority: Minor
>
> When using a subclass of CompoundWordTokenFilterBase the filter will always 
> output the original input token along with the decomposed tokens if there are 
> any. This will result in documents that originally had the compound form to 
> have both the compound and decomposed form while documents that originally 
> had the decomposed form will only have the decomposed form. Only queries in 
> the decomposed forms will match more documents when using this filter.
> If the filter can also be run at query time compound forms can be decomposed 
> and match additional documents. To do this the filter needs to be able to 
> return only the decomposed form if there is a decomposed form. 






[jira] [Assigned] (SOLR-14969) Prevent creating multiple cores with the same name which leads to instabilities (race condition)

2020-10-29 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-14969:
-

Assignee: Erick Erickson

> Prevent creating multiple cores with the same name which leads to 
> instabilities (race condition)
> 
>
> Key: SOLR-14969
> URL: https://issues.apache.org/jira/browse/SOLR-14969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: multicore
>Affects Versions: 8.6, 8.6.3
>Reporter: Andreas Hubold
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CoreContainer#create does not correctly handle concurrent requests to create 
> the same core. There's a race condition (see also existing TODO comment in 
> the code), and CoreContainer#createFromDescriptor may be called subsequently 
> for the same core name.
> The _second call_ then fails to create an IndexWriter, and exception handling 
> causes an inconsistent CoreContainer state.
> {noformat}
> 2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core 
> [blueprint_acgqqafsogyc_comments] Caused by: Lock held by this virtual 
> machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [blueprint_acgqqafsogyc_comments]
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
>  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
>  ... 47 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
>  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
>  at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
>  ... 48 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
>  at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
>  at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
>  at org.apache.solr.core.SolrCore.(SolrCore.java:1012)
>  ... 50 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by 
> this virtual machine: 
> /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
>  at 
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
>  at 
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>  at 
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>  at 
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
>  at org.apache.lucene.index.IndexWriter.(IndexWriter.java:785)
>  at 
> org.apache.solr.update.SolrIndexWriter.(SolrIndexWriter.java:126)
>  at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
>  at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
>  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145) 
> {noformat}
> CoreContainer#createFromDescriptor removes the CoreDescriptor when handling 
> this exception. The SolrCore created for the first successful call is still 
> registered in SolrCores.cores, but now there's no corresponding 
> CoreDescriptor for that name anymore.
> This inconsistency leads to subsequent NullPointerExceptions, for example 
> when using CoreAdmin STATUS with the core name: 
> CoreAdminOperation#getCoreStatus first gets the non-null SolrCore 
> (cores.getCore(cname)) but core.getInstancePath() throws an NPE, because the 
> CoreDescriptor is not registered anymore:
> {noformat}
> 2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={core=blueprint_acgqqafsogyc_comments&action=STATUS&indexInfo=false&wt=javabin&version=2} status=500 QTime=0
> 2020-10-27 00:29:25.353 ERROR (qtp2029
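The failure described above is a classic check-then-act race: two concurrent CREATE requests can both pass CoreContainer's "already exists" check before either one registers the core. The following is a minimal standalone sketch of that race, not Solr's actual classes; the class name, the map payload, and the latch (which stands in for unlucky timing so the race is deterministic) are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for the check-then-act bug: the "already exists?"
// check and the registration are two separate steps, not one atomic step.
public class CheckThenActRace {
    private final Map<String, String> cores = new ConcurrentHashMap<>();
    private final CountDownLatch bothChecked = new CountDownLatch(2);

    String createRacy(String name) throws InterruptedException {
        if (!cores.containsKey(name)) {     // step 1: check
            bothChecked.countDown();
            bothChecked.await();            // force both threads past the check
            return cores.put(name, "core"); // step 2: act; second put clobbers first
        }
        return "rejected";
    }

    public static void main(String[] args) throws Exception {
        CheckThenActRace c = new CheckThenActRace();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> a = pool.submit(() -> c.createRacy("core1"));
        Future<String> b = pool.submit(() -> c.createRacy("core1"));
        // Neither call was rejected: both passed the check, one overwrote the other.
        System.out.println(!"rejected".equals(a.get()) && !"rejected".equals(b.get()));
        pool.shutdown();
    }
}
```

In the real CoreContainer the second "act" is creating an IndexWriter against the same data directory, which is what surfaces as the `LockObtainFailedException` in the log above.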

[GitHub] [lucene-solr] ErickErickson commented on pull request #2048: SOLR-14969: Prevent creating multiple cores with the same name which …

2020-10-29 Thread GitBox


ErickErickson commented on pull request #2048:
URL: https://github.com/apache/lucene-solr/pull/2048#issuecomment-719147387


   Silly precommit failure, new PR shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] ErickErickson closed pull request #2048: SOLR-14969: Prevent creating multiple cores with the same name which …

2020-10-29 Thread GitBox


ErickErickson closed pull request #2048:
URL: https://github.com/apache/lucene-solr/pull/2048


   






[GitHub] [lucene-solr] ErickErickson opened a new pull request #2049: SOLR-14969: Prevent creating multiple cores with the same name which leads to instabilities (race condition)

2020-10-29 Thread GitBox


ErickErickson opened a new pull request #2049:
URL: https://github.com/apache/lucene-solr/pull/2049


   Fixed ecjLint failure. All tests pass on master.






[GitHub] [lucene-site] tflobbe merged pull request #31: Add CVE-2020-13957 page

2020-10-29 Thread GitBox


tflobbe merged pull request #31:
URL: https://github.com/apache/lucene-site/pull/31


   






[GitHub] [lucene-site] tflobbe opened a new pull request #32: Publish: Add CVE-2020-13957 page (#31)

2020-10-29 Thread GitBox


tflobbe opened a new pull request #32:
URL: https://github.com/apache/lucene-site/pull/32


   






[GitHub] [lucene-solr] muse-dev[bot] commented on a change in pull request #2049: SOLR-14969: Prevent creating multiple cores with the same name which leads to instabilities (race condition)

2020-10-29 Thread GitBox


muse-dev[bot] commented on a change in pull request #2049:
URL: https://github.com/apache/lucene-solr/pull/2049#discussion_r514854533



##
File path: solr/core/src/java/org/apache/solr/core/CoreContainer.java
##
@@ -1253,77 +1254,90 @@ public SolrCore create(String coreName, Map<String, String> parameters) {
* @return the newly created core
*/
   public SolrCore create(String coreName, Path instancePath, Map<String, String> parameters, boolean newCollection) {
-
-CoreDescriptor cd = new CoreDescriptor(coreName, instancePath, parameters, getContainerProperties(), getZkController());
-
-// TODO: There's a race here, isn't there?
-// Since the core descriptor is removed when a core is unloaded, it should never be anywhere when a core is created.
-if (getAllCoreNames().contains(coreName)) {
-  log.warn("Creating a core with existing name is not allowed");
-  // TODO: Shouldn't this be a BAD_REQUEST?
-  throw new SolrException(ErrorCode.SERVER_ERROR, "Core with name '" + coreName + "' already exists.");
-}
-
-// Validate paths are relative to known locations to avoid path traversal
-assertPathAllowed(cd.getInstanceDir());
-assertPathAllowed(Paths.get(cd.getDataDir()));
-
-boolean preExisitingZkEntry = false;
 try {
-  if (getZkController() != null) {
-if (cd.getCloudDescriptor().getCoreNodeName() == null) {
-  throw new SolrException(ErrorCode.SERVER_ERROR, "coreNodeName missing " + parameters.toString());
+  synchronized (inFlightCreations) {
+if (inFlightCreations.contains(coreName)) {
+  String msg = "Already creating a core with name '" + coreName + "', call aborted '";
+  log.warn(msg);
+  throw new SolrException(ErrorCode.SERVER_ERROR, msg);
 }
-preExisitingZkEntry = getZkController().checkIfCoreNodeNameAlreadyExists(cd);
+inFlightCreations.add(coreName);
+  }
+  CoreDescriptor cd = new CoreDescriptor(coreName, instancePath, parameters, getContainerProperties(), getZkController());

Review comment:
   *THREAD_SAFETY_VIOLATION:*  Read/Write race. Non-private method 
`CoreContainer.create(...)` indirectly reads without synchronization from 
`this.zkSys.zkController`. Potentially races with write in method 
`CoreContainer.load()`.
Reporting because this access may occur on a background thread.
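The diff above reserves the core name in an `inFlightCreations` set before doing any expensive work, and releases it when creation finishes or fails. A minimal standalone sketch of that reserve/release pattern follows; the class name, the map of cores, and the use of `IllegalStateException` are simplifications for illustration, not Solr's actual API.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the inFlightCreations pattern: reserve the name atomically
// up front, build the core outside the lock, release in finally.
public class InFlightGuard {
    private final Set<String> inFlightCreations = new HashSet<>();
    private final Map<String, Object> cores = new ConcurrentHashMap<>();

    public Object create(String coreName) {
        synchronized (inFlightCreations) {
            if (inFlightCreations.contains(coreName) || cores.containsKey(coreName)) {
                throw new IllegalStateException("Already creating or created: " + coreName);
            }
            inFlightCreations.add(coreName); // reserve the name atomically
        }
        try {
            Object core = new Object();      // stand-in for the expensive SolrCore build
            cores.put(coreName, core);
            return core;
        } finally {
            synchronized (inFlightCreations) {
                inFlightCreations.remove(coreName); // always release the reservation
            }
        }
    }

    public static void main(String[] args) {
        InFlightGuard g = new InFlightGuard();
        g.create("core1");
        try {
            g.create("core1");               // duplicate is rejected up front
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints "Already creating or created: core1"
        }
    }
}
```

Only the set membership check is held under the lock, so a slow core build does not block creations of other cores; the `finally` block guarantees the reservation is released even when creation throws.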








[GitHub] [lucene-site] tflobbe commented on pull request #32: Publish: Add CVE-2020-13957 page (#31)

2020-10-29 Thread GitBox


tflobbe commented on pull request #32:
URL: https://github.com/apache/lucene-site/pull/32#issuecomment-719169378


   Since this included David's changes and the UI only lets me squash, I pushed 
from command line.






[GitHub] [lucene-site] tflobbe closed pull request #32: Publish: Add CVE-2020-13957 page (#31)

2020-10-29 Thread GitBox


tflobbe closed pull request #32:
URL: https://github.com/apache/lucene-site/pull/32


   






[jira] [Commented] (SOLR-14034) remove deprecated min_rf references

2020-10-29 Thread Tim Dillon (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223427#comment-17223427
 ] 

Tim Dillon commented on SOLR-14034:
---

[~marcussorealheis] I see it has been a while since there was any activity on 
this, is it still available to work on? I'm new here as well and this seems 
like a good place to get started.

> remove deprecated min_rf references
> ---
>
> Key: SOLR-14034
> URL: https://issues.apache.org/jira/browse/SOLR-14034
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
>  Labels: newdev
> Fix For: master (9.0)
>
>
> * {{min_rf}} support was added under SOLR-5468 in version 4.9 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/4.9.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L50)
>  and deprecated under SOLR-12767 in version 7.6 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.6.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L57-L61)
> * http://lucene.apache.org/solr/7_6_0/changes/Changes.html and 
> https://lucene.apache.org/solr/guide/8_0/major-changes-in-solr-8.html#solr-7-6
>  both clearly mention the deprecation
> This ticket is to fully remove {{min_rf}} references in code, tests and 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [lucene-solr] CaoManhDat merged pull request #2026: LUCENE-8626: Standardize Lucene Test Files

2020-10-29 Thread GitBox


CaoManhDat merged pull request #2026:
URL: https://github.com/apache/lucene-solr/pull/2026


   






[jira] [Commented] (LUCENE-8626) standardise test class naming

2020-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223444#comment-17223444
 ] 

ASF subversion and git services commented on LUCENE-8626:
-

Commit 57729c9acaace26f37644c42a0b0889508e589ba in lucene-solr's branch refs/heads/master from Marcus
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=57729c9 ]

LUCENE-8626: Standardize Lucene Test Files (#2026)



> standardise test class naming
> -
>
> Key: LUCENE-8626
> URL: https://issues.apache.org/jira/browse/LUCENE-8626
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12939.01.patch, SOLR-12939.02.patch, 
> SOLR-12939.03.patch, SOLR-12939_hoss_validation_groovy_experiment.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This was mentioned and proposed on the dev mailing list. Starting this ticket 
> here to start to make it happen?
> History: This ticket was created as 
> https://issues.apache.org/jira/browse/SOLR-12939 ticket and then got 
> JIRA-moved to become https://issues.apache.org/jira/browse/LUCENE-8626 ticket.






[GitHub] [lucene-solr] CaoManhDat edited a comment on pull request #2026: LUCENE-8626: Standardize Lucene Test Files

2020-10-29 Thread GitBox


CaoManhDat edited a comment on pull request #2026:
URL: https://github.com/apache/lucene-solr/pull/2026#issuecomment-719286855


   Thank you @MarcusSorealheis and everyone






[GitHub] [lucene-solr] CaoManhDat commented on pull request #2026: LUCENE-8626: Standardize Lucene Test Files

2020-10-29 Thread GitBox


CaoManhDat commented on pull request #2026:
URL: https://github.com/apache/lucene-solr/pull/2026#issuecomment-719286855


   Thank you @MarcusSorealheis 


