[jira] [Commented] (LUCENE-8951) Create issues@ and builds@ lists and update notifications

2019-08-28 Thread Shawn Heisey (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917564#comment-16917564
 ] 

Shawn Heisey commented on LUCENE-8951:
--

I didn't see anything come in.  Will they sign up my @apache.org email address?

> Create issues@ and builds@ lists and update notifications
> -
>
> Key: LUCENE-8951
> URL: https://issues.apache.org/jira/browse/LUCENE-8951
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> Issue to plan and execute the decision from the dev mailing list 
> [https://lists.apache.org/thread.html/762d72a9045642dc488dc7a2fd0a525707e5fa5671ac0648a3604c9b@%3Cdev.lucene.apache.org%3E]
>  # Create mailing lists as an announce only list (/)
>  # Subscribe all emails that will be allowed to post (/)
>  # Update websites with info about the new lists
>  # Announce to dev@ list that the change will happen
>  # Modify Jira and Github bots to post to issues@ list instead of dev@
>  # Modify Jenkins (including Policeman and others) to post to builds@
>  # Announce to dev@ list that the change is effective



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13719) SolrClient.ping() in 8.2, using SolrJ

2019-08-26 Thread Shawn Heisey (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916061#comment-16916061
 ] 

Shawn Heisey commented on SOLR-13719:
-

Looks like we have an oversight in SolrClient and its children - there should 
be a ping method that takes a collection name.  Variants like this exist for 
most of the functionality provided by SolrClient.

I can think of two workarounds until that happens:

1) Use the setDefaultCollection method, which should work quite well for 
deployments with a single collection.  I see no evidence that this method is 
slated for removal.
2) Code like this, which directs to a specific collection:
{noformat}
SolrPingResponse rsp = new SolrPing().process(client, collection);
{noformat}
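For illustration, a minimal SolrJ sketch combining both workarounds.  The ZK 
address and the collection name "mycollection" are hypothetical:

{noformat}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.SolrPing;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class PingWorkarounds {
  public static void main(String[] args) throws Exception {
    CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build();

    // Workaround 1: set a default collection, then ping() needs no name.
    client.setDefaultCollection("mycollection");
    SolrPingResponse rsp1 = client.ping();

    // Workaround 2: direct the ping at a specific collection.
    SolrPingResponse rsp2 = new SolrPing().process(client, "mycollection");

    System.out.println(rsp1.getStatus() + " / " + rsp2.getStatus());
    client.close();
  }
}
{noformat}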


> SolrClient.ping() in 8.2, using SolrJ
> -
>
> Key: SOLR-13719
> URL: https://issues.apache.org/jira/browse/SOLR-13719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.2
> Environment: linux mint 19,  java8
>Reporter: Benjamin Wade Friedman
>Priority: Trivial
>  Labels: beginner, easyfix, newbie
>
> I started a local SolrCloud instance with two nodes and two replicas per 
> node.  I created one empty collection on each node.  So I guess I have two 
> shards per collection.
>  
> I tried to use the ping method in Solrj to verify my connected client.  When 
> I try to use it, it throws ...
>  
> Caused by: org.apache.solr.common.SolrException: No collection param 
> specified on request and no default collection has been set: []
> at 
> org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1071)
>  ~[solr-solrj-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - 
> ivera - 2019-07-19 15:11:07]
>  
> I cannot pass a collection name to the ping request.  And the 
> CloudSolrClient.Builder does not allow me to declare a default collection.  
> BaseCloudSolrClient.setDefaultCollection(String) is effectively deprecated 
> because CloudSolrClient no longer has a public constructor.
>  
> Can we add an argument to the Builder constructor to accept a string for the 
> default collection?  Or a new setter on the Builder?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13464) no way for external clients to detect when changes to security config have taken effect

2019-08-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905705#comment-16905705
 ] 

Shawn Heisey commented on SOLR-13464:
-

I'm thinking that if security.json had a place for a serial number, similar to 
a DNS zone file, the auth API could report the serial number of the config 
that has been fully initialized.  When the config is changed via the API, Solr 
would advance the serial number, and the docs would recommend that users do 
the same when editing the file themselves.

Or is that just ugly?
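As a sketch of what I mean -- "serial" here is a hypothetical field name, not 
anything Solr currently reads:

{noformat}
{
  "serial": 3,
  "authentication": {"class": "solr.BasicAuthPlugin"},
  "authorization": {"class": "solr.RuleBasedAuthorizationPlugin"}
}
{noformat}

The auth API would then report the highest serial for which every plugin has 
finished (re)initializing, so a client could poll until the expected serial 
appears.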


> no way for external clients to detect when changes to security config have 
> taken effect
> ---
>
> Key: SOLR-13464
> URL: https://issues.apache.org/jira/browse/SOLR-13464
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Priority: Major
>
> The basic functionality of the authorization/authentication REST APIs works 
> by persisting changes to a {{security.json}} file in ZooKeeper which is 
> monitored by every node via a Watcher.  When the watchers fire, the affected 
> plugin types are (re)initialized with the new settings.
> Since this information is "pulled" from ZK by the nodes, there is a (small) 
> inherent delay between when the REST API is hit by external clients, and when 
> each node learns of the changes.  An additional delay exists as the config is 
> "reloaded" to (re)initialize the plugins.
> Practically speaking these delays have very little impact on a "real" solr 
> cloud cluster, but they can be problematic in test cases -- while the 
> SecurityConfHandler on each node could be used to query the "current" 
> security.json file, it doesn't indicate if/when the plugins identified in the 
> "current" configuration are fully in use.
> For now, we have a "white box" work around available for MiniSolrCloudCluster 
> based tests by comparing the Plugins of each CoreContainer in use before and 
> after making known changes via the API (see commits identified below).
> This issue exists as a placeholder for future consideration of UX/API 
> improvements making it easier for external clients (w/o "white box" access to 
> solr internals) to know definitively if/when modified security settings take 
> effect.
> {panel:title=original jira description}
> I've been investigating some sporadic and hard to reproduce test failures 
> related to authentication in cloud mode, and i *think* (but have not directly 
> verified) that the common cause is that after uses one of the 
> {{/admin/auth...}} handlers to update some setting, there is an inherient and 
> unpredictible delay (due to ZK watches) until every node in the cluster has 
> had a chance to (re)load the new configuration and initialize the various 
> security plugins with the new settings.
> Which means, if a test client does a POST to some node to add/change/remove 
> some authn/authz settings, and then immediately hits the exact same node (or 
> any other node) to test that the effects of those settings exist, there is no 
> guarantee that they will have taken effect yet.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-02 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898983#comment-16898983
 ] 

Shawn Heisey commented on SOLR-13672:
-

I was just noticing in the screenshot that our error message says the problem 
was with the 'ruok' command.  If it's actually the 'conf' command that's 
failing, maybe the error message needs a little improving.


> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5, one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the SolrCloud nodes seem to work normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> standalone ZooKeeper instance is used.
> We tried the 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-02 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898980#comment-16898980
 ] 

Shawn Heisey commented on SOLR-13672:
-

I checked the ZK server code for parsing a config file. That code treats it as 
a properties file.  We might want to do the same.
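A minimal sketch of what that could look like on the Solr side, using 
java.util.Properties (the sample 'conf' output is illustrative):

{noformat}
import java.io.StringReader;
import java.util.Properties;

public class ZkConfParse {
  public static void main(String[] args) throws Exception {
    String confOutput = "clientPort=2181\ndataDir=/var/zookeeper/data\ntickTime=2000\n";
    Properties props = new Properties();
    // Properties.load() accepts both '=' and ':' as key/value separators.
    props.load(new StringReader(confOutput));
    System.out.println(props.getProperty("clientPort"));
  }
}
{noformat}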


> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5, one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the SolrCloud nodes seem to work normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> standalone ZooKeeper instance is used.
> We tried the 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-02 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-13672:

Attachment: SOLR-13672.patch

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch
>
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5, one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the SolrCloud nodes seem to work normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> standalone ZooKeeper instance is used.
> We tried the 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-02 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898853#comment-16898853
 ] 

Shawn Heisey commented on SOLR-13672:
-

I think I would call this a bug in ZK.  But since getting a fix from them could 
take a long time, we need to tackle this in Solr.

There are probably two ways to handle this.  1) Look for the = separator, and 
if not found, use the : separator.  2) Treat the conf output as a .properties 
file and let Java parse it for us.

I'm attaching a patch that takes the first approach.
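A rough sketch of that first approach (not the actual patch):

{noformat}
public class ConfLineSplitter {
  // Prefer the '=' separator; if it is absent, fall back to ':'.
  static String[] splitConfLine(String line) {
    int idx = line.indexOf('=');
    if (idx < 0) idx = line.indexOf(':');
    if (idx < 0) return new String[] {line.trim(), ""};
    return new String[] {line.substring(0, idx).trim(), line.substring(idx + 1).trim()};
  }

  public static void main(String[] args) {
    System.out.println(splitConfLine("clientPort=2181")[1]);          // 2181
    System.out.println(splitConfLine("server.1: host:2888:3888")[0]); // server.1
  }
}
{noformat}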


> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5, one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the SolrCloud nodes seem to work normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> standalone ZooKeeper instance is used.
> We tried the 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13665) Connecting to ZK on SSL port (secureClient: ClassNotDef found error)

2019-07-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896471#comment-16896471
 ] 

Shawn Heisey commented on SOLR-13665:
-

Yes, the netty jar probably needs to be added to solrj-libs as well as the 
webapp.  I wonder if the SSL functionality that ZK needs can be satisfied by a 
smaller jar than netty-all.  The Solr download is already quite large.  The 
specific class that was missing in the error above is in the netty-transport 
jar, but I have no idea whether that jar contains everything that ZK needs for 
SSL:

https://mvnrepository.com/artifact/io.netty/netty-transport/4.1.38.Final


> Connecting to ZK on SSL port (secureClient: ClassNotDef found error)
> 
>
> Key: SOLR-13665
> URL: https://issues.apache.org/jira/browse/SOLR-13665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
>
>  
>  I managed to set up ZooKeeper 3.5.5 with secureClient enabled, and I 
> configured the ZooKeeper properties in solr.in.sh to use that port, which 
> offers SSL.
> However, I see the following error in the logfiles when starting up Solr:
> 2019-07-30 14:59:09.704 INFO  (main) [   ] o.a.z.c.X509Util Setting -D 
> jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated 
> TLS renegotiation
>  2019-07-30 14:59:09.710 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
>  2019-07-30 14:59:09.743 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NoClassDefFoundError: io/netty/channel/ChannelHandler
>      at java.base/java.lang.Class.forName0(Native Method)
>      at java.base/java.lang.Class.forName(Class.java:315)
>      at 
> org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:3063)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:883)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:801)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:950)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:688)
>      at 
> org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:43)
>      at 
> org.apache.solr.common.cloud.ZkClientConnectionStrategy.createSolrZooKeeper(ZkClientConnectionStrategy.java:105)
>      at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.connect(DefaultConnectionStrategy.java:37)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:166)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:125)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:120)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:107)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:282)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:259)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:181)
>      at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:136)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:750)
>      at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>      at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>      at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>      at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
>      at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:369)
>      at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
>      at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:854)
>      at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:278)
>      at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
>      at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>      at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
>      at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:192)
>      at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:510)
>      at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.

[jira] [Commented] (SOLR-13665) Connecting to ZK on SSL port (secureClient: ClassNotDef found error)

2019-07-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896411#comment-16896411
 ] 

Shawn Heisey commented on SOLR-13665:
-

The ZK SSL guide says that Netty is required for SSL.

https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide

Solr has a dependency on netty for tests, but not for anything else -- it's not 
in the binary download.

This requirement probably was not known when we upgraded ZK to 3.5.5.  Manually 
adding the netty jar to WEB-INF/lib would most likely fix the problem.  I can't 
guarantee that, but I think it should work.

The binary ZK 3.5.5 download contains a file named netty-all-4.1.29.Final.jar 
that should work.
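If someone wants to try that, something like this should do it, assuming the 
default layout of the binary Solr and ZK downloads (paths may differ):

{noformat}
cp apache-zookeeper-3.5.5-bin/lib/netty-all-4.1.29.Final.jar \
   solr-8.2.0/server/solr-webapp/webapp/WEB-INF/lib/
{noformat}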

> Connecting to ZK on SSL port (secureClient: ClassNotDef found error)
> 
>
> Key: SOLR-13665
> URL: https://issues.apache.org/jira/browse/SOLR-13665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
>
>  
>  I managed to set up ZooKeeper 3.5.5 with secureClient enabled, and I 
> configured the ZooKeeper properties in solr.in.sh to use that port, which 
> offers SSL.
> However, I see the following error in the logfiles when starting up Solr:
> 2019-07-30 14:59:09.704 INFO  (main) [   ] o.a.z.c.X509Util Setting -D 
> jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated 
> TLS renegotiation
>  2019-07-30 14:59:09.710 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
>  2019-07-30 14:59:09.743 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NoClassDefFoundError: io/netty/channel/ChannelHandler
>      at java.base/java.lang.Class.forName0(Native Method)
>      at java.base/java.lang.Class.forName(Class.java:315)
>      at 
> org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:3063)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:883)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:801)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:950)
>      at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:688)
>      at 
> org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:43)
>      at 
> org.apache.solr.common.cloud.ZkClientConnectionStrategy.createSolrZooKeeper(ZkClientConnectionStrategy.java:105)
>      at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.connect(DefaultConnectionStrategy.java:37)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:166)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:125)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:120)
>      at 
> org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:107)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.loadNodeConfig(SolrDispatchFilter.java:282)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:259)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:181)
>      at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:136)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:750)
>      at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>      at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>      at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>      at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
>      at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:369)
>      at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
>      at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:854)
>      at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:278)
>      at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
>      at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>      at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
>      at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:192)
>      at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:510)
>      

[jira] [Assigned] (SOLR-8346) Upgrade Zookeeper to version 3.5.5

2019-07-19 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey reassigned SOLR-8346:
--

Assignee: Erick Erickson  (was: Shawn Heisey)

> Upgrade Zookeeper to version 3.5.5
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
>Priority: Major
>  Labels: security, zookeeper
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-8346.patch, SOLR-8346.patch, SOLR-8346.patch, 
> SOLR-8346.patch, SOLR-8346.patch, SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. --Currently a 3.5.4-beta is released (2018-05-17).-- 
> Version 3.5.5 was released 2019-05-20



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.5

2019-07-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1677#comment-1677
 ] 

Shawn Heisey commented on SOLR-8346:


[~michelwigbers] This issue is about upgrading the ZK client code built into 
Solr to 3.5.x.  You're running a Solr version with the 3.4.x client against 
3.5.x servers.

The most likely reason for the exception you're seeing is that your ZK servers 
are not allowing Solr to use "four letter word" commands.  The 3.5.x version of 
the ZK server disallows all 4lw commands by default.  You will need this in 
your ZK server config:

4lw.commands.whitelist=mntr,conf,ruok


> Upgrade Zookeeper to version 3.5.5
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Assignee: Shawn Heisey
>Priority: Major
>  Labels: security, zookeeper
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-8346.patch, SOLR-8346.patch, SOLR-8346.patch, 
> SOLR-8346.patch, SOLR-8346.patch, SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. --Currently a 3.5.4-beta is released (2018-05-17).-- 
> Version 3.5.5 was released 2019-05-20



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8346) Upgrade Zookeeper to version 3.5.5

2019-07-19 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey reassigned SOLR-8346:
--

Assignee: Shawn Heisey  (was: Erick Erickson)

> Upgrade Zookeeper to version 3.5.5
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Assignee: Shawn Heisey
>Priority: Major
>  Labels: security, zookeeper
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-8346.patch, SOLR-8346.patch, SOLR-8346.patch, 
> SOLR-8346.patch, SOLR-8346.patch, SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. --Currently a 3.5.4-beta is released (2018-05-17).-- 
> Version 3.5.5 was released 2019-05-20



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Moved] (LUCENE-8926) Test2BDocs.test2BDocs error java.lang.ArrayIndexOutOfBoundsException

2019-07-18 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey moved SOLR-13639 to LUCENE-8926:
-

Affects Version/s: (was: 8.1.1)
   8.1.1
 Security: (was: Public)
Lucene Fields: New
  Key: LUCENE-8926  (was: SOLR-13639)
  Project: Lucene - Core  (was: Solr)

> Test2BDocs.test2BDocs error java.lang.ArrayIndexOutOfBoundsException
> 
>
> Key: LUCENE-8926
> URL: https://issues.apache.org/jira/browse/LUCENE-8926
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.1.1
>Reporter: Daniel Black
>Priority: Major
>
> {noformat}
> HEARTBEAT J2 PID(27364@bobby): 2019-07-16T01:43:37, stalled for 190s at: 
> Test2BPoints.test2D
> 1> indexed: 0
> 1> indexed: 1000
> 1> indexed: 2000
> ...
> 1> indexed: 213000
> 1> indexed: 214000
> 1> verifying...
> 2> NOTE: reproduce with: ant test -Dtestcase=Test2BDocs 
> -Dtests.method=test2BDocs -Dtests.seed=ECB69064FDA0F2A6 -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=zh -Dtests.timezone=Europe/Zurich 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> [00:41:02.330] ERROR 3755s J1 | Test2BDocs.test2BDocs <<<
> > Throwable #1: java.lang.ArrayIndexOutOfBoundsException
> > at __randomizedtesting.SeedInfo.seed([ECB69064FDA0F2A6:2B65388480C84F43]:0)
> > at 
> > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockImpactsEverythingEnum.advance(Lucene50PostingsReader.java:1605)
> > at 
> > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockImpactsEverythingEnum.nextDoc(Lucene50PostingsReader.java:1583)
> > at org.apache.lucene.index.CheckIndex.checkFields(CheckIndex.java:1526)
> > at org.apache.lucene.index.CheckIndex.testPostings(CheckIndex.java:1871)
> > at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:724)
> > at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:280)
> > at 
> > org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:862)
> > at org.apache.lucene.index.Test2BDocs.test2BDocs(Test2BDocs.java:127)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at 
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
> > at 
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> > at java.lang.reflect.Method.invoke(Method.java:508)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
> > at 
> > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> > at 
> > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> > at 
> > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> > at 
> > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> > at 
> > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> > at 
> > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> > at 
> > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
> > at 
> > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> > at 
> > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> > at 
> > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesO

[jira] [Moved] (LUCENE-8925) Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out of order (490879719 <= 490879719 )

2019-07-18 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey moved SOLR-13642 to LUCENE-8925:
-

Affects Version/s: (was: 8.1.1)
   8.1.1
 Security: (was: Public)
Lucene Fields: New
  Key: LUCENE-8925  (was: SOLR-13642)
  Project: Lucene - Core  (was: Solr)

> Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out 
> of order (490879719 <= 490879719 )
> --
>
> Key: LUCENE-8925
> URL: https://issues.apache.org/jira/browse/LUCENE-8925
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.1.1
> Environment: RHEL-7.3 (ppc64le - Power9)
> kernel 3.10.0-957.21.3.el7.ppc64le
> 48G vm, 64 core
> java version "1.8.0_211"
> Java(TM) SE Runtime Environment (build 8.0.5.37 - 
> pxl6480sr5fp37-20190618_01(SR5 FP37))
> IBM J9 VM (build 2.9, JRE 1.8.0 Linux ppc64le-64-Bit Compressed References 
> 20190617_419755 (JIT enabled, AOT enabled)
> OpenJ9 - 354b31d
> OMR - 0437c69
> IBM - 4972efe)
> JCL - 20190606_01 based on Oracle jdk8u211-b25
>Reporter: Daniel Black
>Priority: Major
>  Labels: test-failure
>
> 8x branch at commit 081e2ef2c05e017e87a2aef2a4f55067fbba5cb4
> while running {{ant   -Dtests.filter=(@monster or @slow) and not(@awaitsfix) 
> -Dtests.heapsize=4G -Dtests.jvms=64 test}}
> {noformat}
>   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BPostingsBytes 
> -Dtests.method=test -Dtests.seed=1C14F78FC0AF1835 -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=fr 
> -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> [23:54:00.627] ERROR111s J52 | Test2BPostingsBytes.test <<<
>> Throwable #1: org.apache.lucene.index.CorruptIndexException: docs out of 
> order (490879719 <= 490879719 ) 
> (resource=MockIndexOutputWrapper(FSIndexOutput(path="/home/danielgb
> /lucene-solr/lucene/build/core/test/J52/temp/lucene.index.Test2BPostingsBytes_1C14F78FC0AF1835-001/2BPostingsBytes3-001/_0_Lucene50_0.doc")))
>>  at 
> __randomizedtesting.SeedInfo.seed([1C14F78FC0AF1835:9440C8556E5375CD]:0)
>>  at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236)
>>  at 
> org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148)
>>  at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865)
>>  at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344)
>>  at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>>  at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169)
>>  at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245)
>>  at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140)
>>  at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2988)
>>  at org.apache.lucene.util.TestUtil.addIndexesSlowly(TestUtil.java:990)
>>  at 
> org.apache.lucene.index.Test2BPostingsBytes.test(Test2BPostingsBytes.java:127)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
>>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
>>  at java.lang.reflect.Method.invoke(Method.java:508)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
>>  at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>>  at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>>  at 
> com

[jira] [Commented] (SOLR-13642) Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out of order (490879719 <= 490879719 )

2019-07-18 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887961#comment-16887961
 ] 

Shawn Heisey commented on SOLR-13642:
-

Test2BPostingsBytes is a Lucene test, not a Solr test, which means this report 
is out of place in the SOLR project on Jira.  The test mentioned in SOLR-13639 
is also a Lucene test.  I will move both issues to the LUCENE project.

Java from IBM is known to have bugs when running Lucene.  IBM enables several 
optimizations by default, some of which are not compatible with Lucene code.  
Using OpenJDK or a JDK from Oracle will likely produce better results.


> Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out 
> of order (490879719 <= 490879719 )
> --
>
> Key: SOLR-13642
> URL: https://issues.apache.org/jira/browse/SOLR-13642
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1.1
> Environment: RHEL-7.3 (ppc64le - Power9)
> kernel 3.10.0-957.21.3.el7.ppc64le
> 48G vm, 64 core
> java version "1.8.0_211"
> Java(TM) SE Runtime Environment (build 8.0.5.37 - 
> pxl6480sr5fp37-20190618_01(SR5 FP37))
> IBM J9 VM (build 2.9, JRE 1.8.0 Linux ppc64le-64-Bit Compressed References 
> 20190617_419755 (JIT enabled, AOT enabled)
> OpenJ9 - 354b31d
> OMR - 0437c69
> IBM - 4972efe)
> JCL - 20190606_01 based on Oracle jdk8u211-b25
>Reporter: Daniel Black
>Priority: Major
>  Labels: test-failure
>
> 8x branch at commit 081e2ef2c05e017e87a2aef2a4f55067fbba5cb4
> while running {{ant   -Dtests.filter=(@monster or @slow) and not(@awaitsfix) 
> -Dtests.heapsize=4G -Dtests.jvms=64 test}}
> {noformat}
>   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BPostingsBytes 
> -Dtests.method=test -Dtests.seed=1C14F78FC0AF1835 -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=fr 
> -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> [23:54:00.627] ERROR111s J52 | Test2BPostingsBytes.test <<<
>> Throwable #1: org.apache.lucene.index.CorruptIndexException: docs out of 
> order (490879719 <= 490879719 ) 
> (resource=MockIndexOutputWrapper(FSIndexOutput(path="/home/danielgb
> /lucene-solr/lucene/build/core/test/J52/temp/lucene.index.Test2BPostingsBytes_1C14F78FC0AF1835-001/2BPostingsBytes3-001/_0_Lucene50_0.doc")))
>>  at 
> __randomizedtesting.SeedInfo.seed([1C14F78FC0AF1835:9440C8556E5375CD]:0)
>>  at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236)
>>  at 
> org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148)
>>  at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865)
>>  at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344)
>>  at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>>  at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169)
>>  at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245)
>>  at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140)
>>  at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2988)
>>  at org.apache.lucene.util.TestUtil.addIndexesSlowly(TestUtil.java:990)
>>  at 
> org.apache.lucene.index.Test2BPostingsBytes.test(Test2BPostingsBytes.java:127)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
>>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
>>  at java.lang.reflect.Method.invoke(Method.java:508)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
>>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
>>  at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>  

[jira] [Closed] (SOLR-13620) Apache Solr warning on 6.2.1

2019-07-11 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-13620.
---

> Apache Solr warning on 6.2.1
> 
>
> Key: SOLR-13620
> URL: https://issues.apache.org/jira/browse/SOLR-13620
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - php
>Affects Versions: 6.2.1
> Environment: Cent OS 6.10, Apache Solr
>Reporter: Midas Nitin
>Priority: Major
>  Labels: performance
> Fix For: 6.2.1
>
>
> Hello,
> We have Apache Solr version 6.2.1 installed on a server, and for a few days 
> we have been getting this warning in the Solr log, which has affected the 
> performance of Solr queries and added latency to our app:
> SolrCore [user_details] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> So we have followed this article 
> [https://support.datastax.com/hc/en-us/articles/207690673-FAQ-Solr-logging-PERFORMANCE-WARNING-Overlapping-onDeckSearchers-and-its-meaning]
>  and made changes in the solrconfig.xml file of user_details like this:
> <maxWarmingSearchers>16</maxWarmingSearchers>
> and we have also reduced the autowarmCount:
>             class="solr.search.LRUCache"
>            size="10"
>            initialSize="0"
>            autowarmCount="5"
>            regenerator="solr.NoOpRegenerator" />
> However, we are still getting this warning.  Can you please help us improve 
> the performance of Solr queries in our app?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13620) Apache Solr warning on 6.2.1

2019-07-11 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-13620.
-
Resolution: Invalid

For this project, Jira is not a support portal, it is for bug reports.  This is 
not a bug.

When you opened this issue, there was bold red text indicating that problems 
should be discussed on the mailing list or IRC channel before opening an issue. 
 I am closing this issue as invalid.

That warning means commits are happening on your index too quickly.  The 
commits are happening WAY too quickly if you still get the warning with the 
maximum warming searchers raised to 16.  The fix is to reduce the frequency of 
your commits.  Such commits may be happening automatically according to Solr's 
configuration.
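A sketch of what that looks like in solrconfig.xml -- remove explicit commits 
from the indexing code and let settings like these (the times are examples 
only) handle it:

{noformat}
<autoCommit>
  <maxTime>60000</maxTime>          <!-- hard commit every 60 seconds -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>120000</maxTime>         <!-- new searcher at most every 2 minutes -->
</autoSoftCommit>
{noformat}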

There is more info than you asked for on this blog post:  
https://lucidworks.com/post/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you need further assistance, please bring this issue up on the mailing list 
or IRC channel.


> Apache Solr warning on 6.2.1
> 
>
> Key: SOLR-13620
> URL: https://issues.apache.org/jira/browse/SOLR-13620
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - php
>Affects Versions: 6.2.1
> Environment: Cent OS 6.10, Apache Solr
>Reporter: Midas Nitin
>Priority: Major
>  Labels: performance
> Fix For: 6.2.1
>
>
> Hello,
> We have Apache Solr version 6.2.1 installed on a server, and for a few days 
> we have been getting this warning in the Solr log, which has affected the 
> performance of Solr queries and added latency to our app:
> SolrCore [user_details] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> So we have followed this article 
> [https://support.datastax.com/hc/en-us/articles/207690673-FAQ-Solr-logging-PERFORMANCE-WARNING-Overlapping-onDeckSearchers-and-its-meaning]
>  and made changes in the solrconfig.xml file of user_details like this:
> <maxWarmingSearchers>16</maxWarmingSearchers>
> and we have also reduced the autowarmCount:
>             class="solr.search.LRUCache"
>            size="10"
>            initialSize="0"
>            autowarmCount="5"
>            regenerator="solr.NoOpRegenerator" />
> However, we are still getting this warning.  Can you please help us improve 
> the performance of Solr queries in our app?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13618) Solr start-up fails to discover cores outside SOLR_HOME

2019-07-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882043#comment-16882043
 ] 

Shawn Heisey commented on SOLR-13618:
-

We need to be careful when deciding just how much rope we will provide for 
users that they might use to hang themselves.

Why do you want your cores in entirely different directory trees?  This might 
seem like a silly question, but we do need to assess whether there are general 
use cases, or if this feature would be rarely used and better handled in some 
other way.

The functionality you want could be achieved right now without code changes by 
putting a symlink (junction on Windows) in the solr home pointing to your 
alternate storage location.
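Assuming a core named "mycore" and a solr home of /var/solr/data (both 
hypothetical), that would look something like:

{noformat}
ln -s /mnt/bigdisk/mycore /var/solr/data/mycore
{noformat}

On Windows, "mklink /J" creates an equivalent junction.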

> Solr start-up fails to discover cores outside SOLR_HOME 
> 
>
> Key: SOLR-13618
> URL: https://issues.apache.org/jira/browse/SOLR-13618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Talvinder Matharu
>Priority: Major
>
> When a new 'instanceDir' is set outside the SOLR_HOME directory, the core 
> will fail to load after a restart, because the server only discovers cores 
> within SOLR_HOME by looking for a 'core.properties'.  
> So what we ideally want is for Solr to check for the "core.properties" 
> defined within all 'instanceDir' paths, which may exist outside SOLR_HOME.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13617) 8.x memory issue

2019-07-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882031#comment-16882031
 ] 

Shawn Heisey commented on SOLR-13617:
-

What is the FULL error message?  Java exceptions are typically many lines long, 
could be dozens.

This should have been brought up on the mailing list before going to Jira.  
When you started the creation process for this issue, there was a note in bold 
red letters at the top of the creation screen stating that problems should be 
discussed before opening an issue.


> 8.x memory issue
> 
>
> Key: SOLR-13617
> URL: https://issues.apache.org/jira/browse/SOLR-13617
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 8.0, 8.1, 8.1.1
> Environment: 32-bit Ubuntu 18.04 LTS using OpenJDK 10.0.2.
>Reporter: Scott Yeadon
>Priority: Critical
>
> I’m running Solr on 32-bit Ubuntu 18.04 LTS using OpenJDK 10.0.2. Up until 
> now I have had no problem with Solr (I have run it since 4.x); however, 
> after upgrading from 7.x to 8.x I am getting serious memory issues.
> I have a small repository of 30,000 documents currently using Solr 7.1 for 
> the search function. I attempted an upgrade to 8.1.1 and tried to perform a 
> full reindex; however, it manages about 300 documents and then dies from 
> lack of memory (or so it says). I tried 8.1.0 with the same result. I then 
> tried 8.0.0, which did successfully manage a full reindex but then died from 
> lack of memory after a couple of search queries. I then tried 7.7.2, which 
> worked fine. I have now gone back to my original 7.1 as I can’t risk 8.x in 
> my production system.
> I increased Xmx to 1024m (previously 512m), but that made no difference. It 
> may be some resource other than memory, but if so, it isn’t saying so, and 
> with such a small repository it doesn’t make sense to be running out of 
> memory, while 7.x runs fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5266) HttpSolrServer: baseURL set by constructor and setter has not the same result

2019-06-26 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873561#comment-16873561
 ] 

Shawn Heisey commented on SOLR-5266:


Check to see if there's an actual problem in current implementations like 
HttpSolrClient.

Based on the description, I do not think it's something to worry about -- 
removing a trailing slash sounds proper to me, so the value returned by the 
client may not precisely match the input.
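A quick way to check with current SolrJ (the URL is an example):

{noformat}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BaseUrlCheck {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/").build()) {
      // If normalization works, this prints the URL without the trailing slash.
      System.out.println(client.getBaseURL());
    }
  }
}
{noformat}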


> HttpSolrServer: baseURL set by constructor and setter has not the same result
> -
>
> Key: SOLR-5266
> URL: https://issues.apache.org/jira/browse/SOLR-5266
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.4
>Reporter: Michael Stieler
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There is an unexpected difference between setting the base URL by the 
> constructor
> solrCore = new HttpSolrServer("http://localhost:8983/solr/");
> and by the setter
> solrCore.setBaseURL("http://localhost:8983/solr/");
> In the constructor, additional checks are performed, such as removing the 
> trailing slash.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5266) HttpSolrServer: baseURL set by constructor and setter has not the same result

2019-06-26 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873522#comment-16873522
 ] 

Shawn Heisey commented on SOLR-5266:


HttpSolrServer was renamed to HttpSolrClient in 5.0.  In 6.0, following the 
project's standard deprecation practice, it was removed.  This happened with 
all the SolrServer implementations except EmbeddedSolrServer -- since that 
actually IS a server.

> HttpSolrServer: baseURL set by constructor and setter has not the same result
> -
>
> Key: SOLR-5266
> URL: https://issues.apache.org/jira/browse/SOLR-5266
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.4
>Reporter: Michael Stieler
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There is an unexpected difference between setting the base URL by the 
> constructor
> solrCore = new HttpSolrServer("http://localhost:8983/solr/");
> and by the setter
> solrCore.setBaseURL("http://localhost:8983/solr/");
> In the constructor, additional checks are performed, such as removing the 
> trailing slash.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13537) Build Status Badge in git README

2019-06-21 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870012#comment-16870012
 ] 

Shawn Heisey commented on SOLR-13537:
-

What would be really awesome would be a badge that reflects the build status of 
the actual repository.  As in ... somebody forks the repo, commits changes to 
their fork, and the badge actually shows whether or not their repository 
compiles.  Total pipe dream, probably not really possible.


> Build Status Badge in git README
> 
>
> Key: SOLR-13537
> URL: https://issues.apache.org/jira/browse/SOLR-13537
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build, documentation
>Affects Versions: master (9.0), 8.2
>Reporter: Marcus Eagan
>Priority: Trivial
> Attachments: Markdown Preview Of Build Status README.png, Simple 
> Artifact Build Badge.png, Simple Artifact Build Badges.png, Single Line 
> Badges.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In order to aid developers and DevOps engineers who are working in a 
> git-driven ecosystem, it would be helpful to see the build status in the 
> README. This is a standard for many open source projects. One could debate 
> whether we should have a multi-line build badge visual in the README, 
> because in the case of Lucene/Solr people need to know about the builds for 
> various versions and platforms, since it is such a large and widely used 
> project in a variety of environments. The badges not only celebrate that 
> fact, they support its persistence in the future with new developers who 
> look for such information instinctively.
> I would recommend the active build pipelines (currently 8.x and 9.x) for each 
> platform: Linux, Windows, MacOSX, and Solaris.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13537) Build Status Badge in git README

2019-06-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16864826#comment-16864826
 ] 

Shawn Heisey commented on SOLR-13537:
-

Does the badge reflect the build only or the build+tests?

I don't think it would be all that useful with the build only, and with the 
current state of Solr tests, build+tests would be red a large percentage of the 
time.  That could cause a reputation problem for the project ... although it 
could be argued that's a plus, as it might be a strong motivation to make 
things better.  The idea proposed by [~janhoy] where we report on a build 
running a smaller subset of tests (at least until Solr's full integration tests 
improve) sounds good.  I think that's the only way this would be helpful right 
now.

We have people diligently working on improving the Solr tests so they don't 
fail constantly, but it's a huge task, and isn't going to be completed quickly. 
 I wish I had enough free time available to help with that.

Unless I'm mistaken, I think the badge would only show up on github.  Since 
github is not the canonical source repository, and as far as I can tell the 
readme is not rendered by Apache gitbox, I wonder how often our committers 
would actually see it.  As Jan says, the information is not all that useful to 
anyone who is not working in the code, and because of the state of Solr tests, 
I have doubts that it would be useful to those of us who are working in the 
code.

Down the road, I do think it's a great idea, but right now, I'm not sure it is. 
 Consider this a -0 vote.

Side note: This probably belongs in LUCENE, not SOLR.


> Build Status Badge in git README
> 
>
> Key: SOLR-13537
> URL: https://issues.apache.org/jira/browse/SOLR-13537
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build, documentation
>Affects Versions: master (9.0), 8.2
>Reporter: Marcus Eagan
>Priority: Trivial
> Attachments: Markdown Preview Of Build Status README.png, Simple 
> Artifact Build Badge.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to aid developers and DevOps engineers who are working in a 
> git-driven ecosystem, it would be helpful to see the status of builds in the 
> README. This is a standard for many open source projects. I think one could 
> debate whether we should have a multi-line build badge visual in the README 
> because people need to know about the builds for various versions and 
> platforms in the case of Lucene/Solr because it is such a large and widely 
> used project, in a variety of environments. The badges not only celebrate 
> that fact, they support its persistence in the future with new developers who 
> look for such information instinctively.
> I would recommend the active build pipelines (currently 8.x and 9.x) for each 
> platform, Linux, Windows, MacOSX, and Solaris.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13452) Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.

2019-05-31 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853374#comment-16853374
 ] 

Shawn Heisey commented on SOLR-13452:
-

On the transitive question:

I think that transitive with explicit excludes is the right way to go -- as 
long as something unexpected will fail precommit, probably due to a missing 
checksum.

Does the precommit check notice when there's a checksum but no matching jar?  
That could happen if an outside project notices they don't need a transitive 
dependency and they remove it.


> Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.
> -
>
> Key: SOLR-13452
> URL: https://issues.apache.org/jira/browse/SOLR-13452
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: master (9.0)
>
>
> I took some things from the great work that Dat did in 
> [https://github.com/apache/lucene-solr/tree/jira/gradle] and took the ball a 
> little further.
>  
> When working with gradle in sub modules directly, I recommend 
> [https://github.com/dougborg/gdub]
> This gradle branch uses the following plugin for version locking, version 
> configuration and version consistency across modules: 
> [https://github.com/palantir/gradle-consistent-versions]
>  
>  https://github.com/apache/lucene-solr/tree/jira/SOLR-13452_gradle_2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8807) Change all download URLs in build files to HTTPS

2019-05-31 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853203#comment-16853203
 ] 

Shawn Heisey commented on LUCENE-8807:
--

bq. I also checked the new JAR checksums after the change with "ant 
jar-checksums" after refreshing everything. No SHA1 changes.

I was catching up on my jira spam, and that line caught my eye and sparked a thought.

I wonder if we should update our hashing algorithm for these checksums, perhaps 
to SHA-512, since SHA-1 has been shown to be vulnerable to collision attacks.  
That would need to spin off into a new issue.
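
For illustration, a plain-JDK sketch of producing the SHA-512 hex digest of a 
jar, the kind of value that would replace the current .sha1 files (this is not 
existing build code):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512Example {
  public static void main(String[] args) throws Exception {
    // Read the whole jar and digest it with SHA-512 instead of SHA-1.
    byte[] data = Files.readAllBytes(Paths.get(args[0]));
    byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);
  }
}
{code}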

> Change all download URLs in build files to HTTPS
> 
>
> Key: LUCENE-8807
> URL: https://issues.apache.org/jira/browse/LUCENE-8807
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 8.1
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
> Attachments: LUCENE-8807.patch, LUCENE-8807.patch
>
>
> At least for Lucene this is not a security issue, because we have checksums 
> for all downloaded JAR dependencies:
> {quote}
> [...] Projects like Lucene do checksum whitelists of
> all their build dependencies, and you may wish to consider that as a
> protection against threats beyond just MITM [...]
> {quote}
> This patch fixes the URLs for most files referenced in {{\*build.xml}} and 
> {{\*ivy\*.xml}} to HTTPS. There are a few data files in benchmark which use 
> HTTP only, but that's uncritical and I added a TODO. Some were broken already.
> I removed the "uk.maven.org" workarounds for Maven, as this does not work 
> with HTTPS. By keeping those inside, we break the whole chain of trust, as 
> any non-working HTTPS would fallback to the insecure uk.maven.org Maven 
> mirror.
> As the great chinese firewall is changing all the time, we should just wait 
> for somebody complaining.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.5

2019-05-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851956#comment-16851956
 ] 

Shawn Heisey commented on SOLR-8346:


bq. Is there a way to do authenticated/encrypted access to 4lw?

This part I don't know, but I would imagine that if you require those things 
for ZK in general, it would apply for 4lw as well.  Something to ask the ZK 
project.

bq. Will new solr be allowed to run with ZK 3.4.x or do we plan to have dual 
support?

The way I read the general ZK guarantees tells me that any version should work 
properly with the previous minor release and the next minor release.  So if we 
release Solr with the 3.5.x client, it should work with ZK servers running 
anything from 3.4.0 through 3.6.X, unless there's a bug that breaks 
compatibility.  If they skip 3.6 and release 4.0 instead, then I think that 
only the last 3.5.x version can be guaranteed to work with the new major 
version ... but slow moving APIs could have a reality that's better than the 
guarantee.

Here's their published information about version compatibility - I got this 
from their mailing list:

https://cwiki.apache.org/confluence/display/ZOOKEEPER/ReleaseManagement


> Upgrade Zookeeper to version 3.5.5
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR-8346.patch, SOLR-8346.patch, SOLR-8346.patch, 
> SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. --Currently a 3.5.4-beta is released (2018-05-17).-- 
> Version 3.5.5 was released 2019-05-20



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13492) Disallow explicit GC by default during Solr startup

2019-05-26 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848491#comment-16848491
 ] 

Shawn Heisey commented on SOLR-13492:
-

This came about while thinking about LUCENE-8814.

> Disallow explicit GC by default during Solr startup
> ---
>
> Key: SOLR-13492
> URL: https://issues.apache.org/jira/browse/SOLR-13492
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Major
>
> Solr should use the -XX:+DisableExplicitGC option as part of its default GC 
> tuning.
> None of Solr's stock code uses explicit GCs, so that option will have no 
> effect on most installs.  The effective result of this is that if somebody 
> adds custom code to Solr and THAT code does an explicit GC, it won't be 
> allowed to function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13492) Disallow explicit GC by default during Solr startup

2019-05-26 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-13492:
---

 Summary: Disallow explicit GC by default during Solr startup
 Key: SOLR-13492
 URL: https://issues.apache.org/jira/browse/SOLR-13492
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Reporter: Shawn Heisey
Assignee: Shawn Heisey


Solr should use the -XX:+DisableExplicitGC option as part of its default GC 
tuning.

None of Solr's stock code uses explicit GCs, so that option will have no effect 
on most installs.  The effective result of this is that if somebody adds custom 
code to Solr and THAT code does an explicit GC, it won't be allowed to function.
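
A stand-alone illustration of the flag's effect (not Solr code):

{code:java}
// Run with:   java -verbose:gc ExplicitGcDemo
// and again:  java -verbose:gc -XX:+DisableExplicitGC ExplicitGcDemo
// The first run logs a full GC for each System.gc() call; the second logs
// none, because the explicit request becomes a no-op.
public class ExplicitGcDemo {
  public static void main(String[] args) {
    for (int i = 0; i < 3; i++) {
      System.gc();
    }
  }
}
{code}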



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8814) Use forbiddenapis to block explicit GCs

2019-05-26 Thread Shawn Heisey (JIRA)
Shawn Heisey created LUCENE-8814:


 Summary: Use forbiddenapis to block explicit GCs
 Key: LUCENE-8814
 URL: https://issues.apache.org/jira/browse/LUCENE-8814
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Shawn Heisey
Assignee: Shawn Heisey


Explicit GCs should not be allowed in production code.

We do have a number of System.gc() calls in Lucene and Solr tests, and in the 
Lucene benchmark.

I intend to block System.gc() and Runtime.gc() in the forbidden APIs 
configuration.  As part of that, I need to know whether the explicit calls in 
the test and benchmark code should be left there and allowed with 
SuppressForbidden, or if we should remove them.  My general thought is that 
production code will most likely not be doing explicit GCs, so tests shouldn't 
be doing it either ... but I didn't design those tests, so I can't claim to 
know the intent.
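
If the calls are kept, the escape hatch could look something like this sketch 
using Lucene's SuppressForbidden annotation; the class and reason string are 
illustrative only:

{code:java}
import org.apache.lucene.util.SuppressForbidden;

public class GcHelper {
  // Illustrative only: how a test or benchmark that genuinely needs an
  // explicit GC could be exempted once System.gc() is forbidden.
  @SuppressForbidden(reason = "benchmark intentionally forces a full GC between runs")
  public static void forceGc() {
    System.gc();
  }
}
{code}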




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13457) Managing Timeout values in Solr

2019-05-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837559#comment-16837559
 ] 

Shawn Heisey commented on SOLR-13457:
-

I've been putting some thought (but unfortunately not any actual *time*) into 
an overhaul of Solr's configuration system as a whole.

It would be awesome if we could have a sane way of setting config at various 
levels - node and core would be the most obvious.  We would need to decide 
whether node config would include SolrJ settings, or if that needs its own 
level ... would we want different cores/collections to be able to have 
different settings?  It would be important to make sure inheritance works 
properly, and to only exclude things from one or more levels when it really 
makes no sense for it to be there.

In cloud mode, the addition of ZK means there can also be cluster-level config 
and collection-level config (distinct from core-level) and if we want to get 
really fancy with Solr existing as two applications, some of the node-level 
config might be in ZK too.  There are also things like the DIH properties file 
that would be nice to bring into a single configuration umbrella.

Some of my ideas for this have been mentioned on SOLR-6733 and SOLR-6734.

> Managing Timeout values in Solr
> ---
>
> Key: SOLR-13457
> URL: https://issues.apache.org/jira/browse/SOLR-13457
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Priority: Major
>
> Presently, Solr has a variety of timeouts for various connections or 
> operations. Throughout the history of the project, these timeouts have been 
> added, tweaked, refined, and in some cases made configurable in an ad-hoc 
> manner by the contributors of individual features. This is all well and good 
> until one experiences a timeout during an otherwise valid use case and needs 
> to adjust it.
> This has also made managing timeouts in unit tests "interesting" as noted in 
> SOLR-13389.
> Probably nobody has the spare time to do a tour de force through the code and 
> coordinate every single timeout, so in this ticket I'd like to establish a 
> framework for categorizing time outs, a standard for how we make each 
> category configurable, and then add sub-tickets to address individual 
> timeouts.
> The intention is that eventually, there will be no "magic number" timeout 
> values in code, and one can predict where to find the configuration for a 
> timeout by determining its category.
> Initial strawman categories (feel free to knock down or suggest alternatives):
>  # *Feature-Instance Timeout*: Timeouts that relate to a particular 
> instantiation of a feature, for example a database connection timeout for a 
> connection to a particular database by DIH. These should be set in the 
> configuration of that instance.
>  # *Optional Feature Timeout*: A timeout that only has meaning in the context 
> of a particular feature that is not required for solr to function... i.e. 
> something that can be turned on or off. Perhaps a timeout for communication 
> with an external ldap for authentication purposes. These should be configured 
> in the same configuration that enables this feature.
>  # *Global System Timeout*: A timeout that will always be an active part of 
> Solr. These should be configured in a new section of solr.xml. For 
> example the Jetty thread idle timeout, or the default timeout for http calls 
> between nodes.
>  # *Node Specific Timeout*: A timeout which may differ on different nodes. I 
> don't know of any of these, but I'll grant the possibility. These (and only 
> these) should be set by setting system properties. If we don't have any of 
> these, that's just fine :).
>  # *Client Timeout*: These are timeouts in solrj code that are active in code 
> running outside the server. They should be configurable via java api, and via 
> a config file of some sort from a single location defined in a sysprop or 
> sourced from classpath (in that order). When run on the server, the solrj 
> code should look for a *Global System Timeout* setting before consulting 
> sysprops or classpath.
> *Note that in no case is a hard-coded value the correct solution.*
> If we get a consensus on categories and their locations, then the next step 
> is to begin adding sub tickets to bring specific timeouts into compliance. 
> Every such ticket should include an update to the section of the ref guide 
> documenting the configuration to which the timeout has been added (e.g. docs 
> for solr.xml for Global System Timeouts) describing what exactly is affected 
> by the timeout, the maximum allowed value and how zero and negative numbers 
> are handled.
> It is of course true that some of these

[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-05-09 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836430#comment-16836430
 ] 

Shawn Heisey commented on SOLR-13394:
-

Regarding the -XX:+PerfDisableSharedMem parameter:  This is another case of 
things from my GC experiments creeping into other places.

On systems with lots of disk writes, that parameter can lead to real 
performance gains.

It does stop a lot of commandline Java tools from working, though ... because 
those tools gather their information from the target JVM through the shared 
memory interface and don't have any other way to do it.  A prime example is 
jstat.

My GC tuning wiki page at [https://wiki.apache.org/solr/ShawnHeisey] references 
a fascinating blog post: [http://www.evanjones.ca/jvm-mmap-pause.html]

For the general case, I think that parameter can really help performance ... 
but some users will be seriously hampered by the lack of working Java 
commandline tools.  Whether or not we leave it in by default, some 
documentation is a good idea.

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch, SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13446) Improve default heap size and related handling

2019-05-07 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835012#comment-16835012
 ] 

Shawn Heisey commented on SOLR-13446:
-

Thanks for your opinion, [~gerlowskija]!

I'm a little torn on changing the default also.  But if it's coded well, I 
think letting Java set the heap size would be a good option.  And anytime Java 
does choose the heap size, we need a log entry informing the user that it 
wasn't explicitly set -- because I would not want to have somebody go into 
production that way without really knowing that's what they're doing.

Even if we stick with a hardcoded default, I absolutely want to have something 
in the log as discussed, with a configurable threshold, and we'll need to 
decide where to set that threshold default.  As mentioned, I think 2GB is a 
good starting point.  And we probably ought to log something short, one line, 
at INFO if they're above the threshold.

Here's more detail on the nuts and bolts I was thinking about:  Write a tiny 
java program that can be executed by the start script, which writes a shell (or 
cmd) include file in a temp directory with data that the start script can load 
before starting Solr.  I'm thinking that we call this little program 
solr-agent, and it can be a precursor to the ideas in SOLR-6733 and SOLR-6734.

I need to do some experiments to determine whether or not Java and our start 
scripts can detect when there is not enough actual memory to satisfy the heap 
allocation.
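
A rough sketch of that idea, assuming a HotSpot JDK (the file name, variable 
name, and sizing rule are all placeholders, not decisions):

{code:java}
import java.lang.management.ManagementFactory;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SolrAgent {
  public static void main(String[] args) throws Exception {
    // HotSpot-specific MXBean exposes total physical memory.
    com.sun.management.OperatingSystemMXBean os =
        (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
    long totalMb = os.getTotalPhysicalMemorySize() / (1024 * 1024);
    // Placeholder sizing rule: half of physical memory, capped at 2048 MB.
    long heapMb = Math.min(2048, totalMb / 2);
    // Write a shell include file that the start script sources before launch.
    Path include = Paths.get(System.getProperty("java.io.tmpdir"), "solr-agent.inc");
    Files.write(include, ("SOLR_HEAP=\"" + heapMb + "m\"\n").getBytes(StandardCharsets.UTF_8));
    System.out.println("Wrote " + include);
  }
}
{code}

The start script would then do something like sourcing that include file before 
it builds the java command line.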

> Improve default heap size and related handling
> --
>
> Key: SOLR-13446
> URL: https://issues.apache.org/jira/browse/SOLR-13446
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 8.0
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr's scripts have a default max heap setting of 512MB.  I think it's fair 
> to say that for a production install, this is ridiculously small.  Nearly 
> everyone who runs a Solr server will need to increase this value.
> I think it would be useful to issue a warning in the log when the heap size 
> is below a certain value.  Text like "Detected max heap size is .  It 
> might be necessary to increase the heap size for proper operation.  See 
> https://lucene.apache.org/solr/path/to/ref/guide/location for details."
> For people who are running very small servers, there should be a config 
> option to turn off that logging when somebody knows that the default heap 
> size is perfectly fine for their setup.
> At the same time, we also need to improve the default heap size.  I'm going 
> to ask everyone to bikeshed about what the new default should be.  Initial 
> thought is a 2GB default, to be made smaller automatically if detected system 
> memory is low.  If the admin has explicitly set the heap size, then none of 
> this will take place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13446) Improve default heap size and related handling

2019-05-03 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832929#comment-16832929
 ] 

Shawn Heisey commented on SOLR-13446:
-

{quote}Just let the JVM do it's thing?
{quote}
I'm ambivalent about this idea.

On the negative side, I think that doing this would contribute to the general 
misconception people have that "Java is a memory hog." Although I do not agree 
with that perception, I'm not going to try and disprove it here.

On the plus side, there's near zero chance that the heap will be too large.  It 
could end up being too small!

Incorporating your idea, here's a more fleshed-out proposal (a rough logging 
sketch follows the list):
 * If explicit memory options are such that the max heap is above the 
threshold, or the admin has configured the option to disable logging, then 
nothing will be logged. We could consider trying to detect when the heap size 
is way too big for the available memory in the server, but I'm not sure we need 
to do that.
 * If no memory options are given, and/or the max heap size is below a certain 
threshold, we log one or two warnings, with text like the following:
 ** "The heap size was not restricted at startup. Java has chosen a max heap 
size of . If this is not acceptable, please configure Solr to use a 
specific heap size. See [https://blahblah|https://blahblah/] for details."
 ** "Solr is running with a heap size of . This is a small heap, which 
might cause operational or performance problems. See 
[https://blahblah|https://blahblah/] for details."
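
Here is the rough logging sketch, assuming an SLF4J logger and a hypothetical 
2GB threshold; the wording mirrors the second message above:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HeapSizeCheck {
  private static final Logger log = LoggerFactory.getLogger(HeapSizeCheck.class);
  // Hypothetical threshold; the actual value is one of the things to bikeshed.
  private static final long THRESHOLD_BYTES = 2L * 1024 * 1024 * 1024;

  public static void warnIfHeapTooSmall() {
    long maxHeap = Runtime.getRuntime().maxMemory();
    if (maxHeap < THRESHOLD_BYTES) {
      log.warn("Solr is running with a heap size of {} MB. This is a small heap, "
          + "which might cause operational or performance problems.",
          maxHeap / (1024 * 1024));
    }
  }
}
{code}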

 

> Improve default heap size and related handling
> --
>
> Key: SOLR-13446
> URL: https://issues.apache.org/jira/browse/SOLR-13446
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 8.0
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr's scripts have a default max heap setting of 512MB.  I think it's fair 
> to say that for a production install, this is ridiculously small.  Nearly 
> everyone who runs a Solr server will need to increase this value.
> I think it would be useful to issue a warning in the log when the heap size 
> is below a certain value.  Text like "Detected max heap size is .  It 
> might be necessary to increase the heap size for proper operation.  See 
> https://lucene.apache.org/solr/path/to/ref/guide/location for details."
> For people who are running very small servers, there should be a config 
> option to turn off that logging when somebody knows that the default heap 
> size is perfectly fine for their setup.
> At the same time, we also need to improve the default heap size.  I'm going 
> to ask everyone to bikeshed about what the new default should be.  Initial 
> thought is a 2GB default, to be made smaller automatically if detected system 
> memory is low.  If the admin has explicitly set the heap size, then none of 
> this will take place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13446) Improve default heap size and related handling

2019-05-03 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832786#comment-16832786
 ] 

Shawn Heisey commented on SOLR-13446:
-

For my ideas, I will need somebody to explain how to create a standalone module 
within the Solr project, one that will make an executable jar.  That 
information will be helpful for SOLR-6733 as well.

> Improve default heap size and related handling
> --
>
> Key: SOLR-13446
> URL: https://issues.apache.org/jira/browse/SOLR-13446
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 8.0
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr's scripts have a default max heap setting of 512MB.  I think it's fair 
> to say that for a production install, this is ridiculously small.  Nearly 
> everyone who runs a Solr server will need to increase this value.
> I think it would be useful to issue a warning in the log when the heap size 
> is below a certain value.  Text like "Detected max heap size is .  It 
> might be necessary to increase the heap size for proper operation.  See 
> https://lucene.apache.org/solr/path/to/ref/guide/location for details."
> For people who are running very small servers, there should be a config 
> option to turn off that logging when somebody knows that the default heap 
> size is perfectly fine for their setup.
> At the same time, we also need to improve the default heap size.  I'm going 
> to ask everyone to bikeshed about what the new default should be.  Initial 
> thought is a 2GB default, to be made smaller automatically if detected system 
> memory is low.  If the admin has explicitly set the heap size, then none of 
> this will take place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13446) Improve default heap size and related handling

2019-05-03 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832782#comment-16832782
 ] 

Shawn Heisey commented on SOLR-13446:
-

I'm hoping that I can find some time to work on this issue.  I have some ideas.

> Improve default heap size and related handling
> --
>
> Key: SOLR-13446
> URL: https://issues.apache.org/jira/browse/SOLR-13446
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 8.0
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr's scripts have a default max heap setting of 512MB.  I think it's fair 
> to say that for a production install, this is ridiculously small.  Nearly 
> everyone who runs a Solr server will need to increase this value.
> I think it would be useful to issue a warning in the log when the heap size 
> is below a certain value.  Text like "Detected max heap size is .  It 
> might be necessary to increase the heap size for proper operation.  See 
> https://lucene.apache.org/solr/path/to/ref/guide/location for details."
> For people who are running very small servers, there should be a config 
> option to turn off that logging when somebody knows that the default heap 
> size is perfectly fine for their setup.
> At the same time, we also need to improve the default heap size.  I'm going 
> to ask everyone to bikeshed about what the new default should be.  Initial 
> thought is a 2GB default, to be made smaller automatically if detected system 
> memory is low.  If the admin has explicitly set the heap size, then none of 
> this will take place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13446) Improve default heap size and related handling

2019-05-03 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-13446:
---

 Summary: Improve default heap size and related handling
 Key: SOLR-13446
 URL: https://issues.apache.org/jira/browse/SOLR-13446
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Affects Versions: 8.0
Reporter: Shawn Heisey


Solr's scripts have a default max heap setting of 512MB.  I think it's fair to 
say that for a production install, this is ridiculously small.  Nearly everyone 
who runs a Solr server will need to increase this value.

I think it would be useful to issue a warning in the log when the heap size is 
below a certain value.  Text like "Detected max heap size is .  It might be 
necessary to increase the heap size for proper operation.  See 
https://lucene.apache.org/solr/path/to/ref/guide/location for details."

For people who are running very small servers, there should be a config option 
to turn off that logging when somebody knows that the default heap size is 
perfectly fine for their setup.

At the same time, we also need to improve the default heap size.  I'm going to 
ask everyone to bikeshed about what the new default should be.  Initial thought 
is a 2GB default, to be made smaller automatically if detected system memory is 
low.  If the admin has explicitly set the heap size, then none of this will 
take place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-04-29 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16829606#comment-16829606
 ] 

Shawn Heisey commented on SOLR-13394:
-

bq. The default region size is very large at 16MB. G1GC recommends ~2048 
regions which would make this setting appropriate only for 32GB heaps or more. 
Can we remove this explicit setting and let G1 choose the default.

I would be curious about whether the latest Java 8 does what Oracle engineers 
promised with regard to better handling of humongous allocations.

An index with 125 million documents will have filterCache entries that are 
about 15MB each.  In order for those to not be considered humongous 
allocations, the region size must be set to its maximum -- 32MB.
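
For reference, the arithmetic behind that claim, assuming one bit per document 
in a filterCache bitset and G1's rule that any allocation of at least half a 
region is humongous:

{code:java}
public class HumongousMath {
  public static void main(String[] args) {
    long docs = 125_000_000L;
    long entryBytes = docs / 8;                          // one bit per doc: 15,625,000 bytes (~15MB)
    System.out.println(entryBytes >= 8L * 1024 * 1024);  // true: humongous with 16MB regions
    System.out.println(entryBytes >= 16L * 1024 * 1024); // false: fits once regions are 32MB
  }
}
{code}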

I wonder if we need some minimal documentation about GC tuning, that at least 
mentions the region size parameter.

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9769) solr stop on a service already stopped should return exit code 0

2019-04-24 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825680#comment-16825680
 ] 

Shawn Heisey commented on SOLR-9769:


I see your point, and offer the following:

If Solr is already stopped, and you try to stop it again, that is actually an 
error condition.  The script cannot complete the requested action ... so one 
way of interpreting that is an error ... though some would say that since the 
service is in fact stopped, it's successful.  I think it should be reported as 
an error.

Perhaps what should happen here is the exit code should be 1 if Solr is already 
stopped, and 2 or higher if there's something that could be classified as more 
of a "real" problem.

As a workaround until we decide exactly what to do about this error report, you 
should investigate whether the "do something" part of your script can be done 
while Solr is running, and use "/etc/init.d/solr restart" instead after it's 
done.  Because most unix and unix-like platforms allow you to delete files that 
are currently held open, there's a good chance that whatever you want to do can 
be done while Solr is running.  I cannot guarantee this, of course.  If we find 
that the restart action doesn't work when the service is already stopped, I 
think that qualifies as a bug.


> solr stop on a service already stopped should return exit code 0
> 
>
> Key: SOLR-9769
> URL: https://issues.apache.org/jira/browse/SOLR-9769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.3
>Reporter: Jiří Pejchal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> According to the LSB specification
> https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic.html#INISCRPTACT
>  running stop on a service already stopped or not running should be 
> considered successful and return code should be 0 (zero).
> Solr currently returns exit code 1:
> {code}
> $ /etc/init.d/solr stop; echo $?
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 4277 to stop gracefully.
> 0
> $ /etc/init.d/solr stop; echo $?
> No process found for Solr node running on port 8983
> 1
> {code}
> {code:title="bin/solr"}
> if [ "$SOLR_PID" != "" ]; then
> stop_solr "$SOLR_SERVER_DIR" "$SOLR_PORT" "$STOP_KEY" "$SOLR_PID"
>   else
> if [ "$SCRIPT_CMD" == "stop" ]; then
>   echo -e "No process found for Solr node running on port $SOLR_PORT"
>   exit 1
> fi
>   fi
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-24 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825580#comment-16825580
 ] 

Shawn Heisey commented on SOLR-13414:
-

bq. and old core was renamed

I don't think you can just rename the old jar.  It needs to be completely 
removed from WEB-INF/lib or Jetty/Java will probably still load it and use it.  
Moving it rather than deleting it would be a good idea, so it can be restored 
later.

Hopefully this note is actually helpful and not a wild goose chase.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initializing correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handl

[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate

2019-04-21 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822748#comment-16822748
 ] 

Shawn Heisey commented on SOLR-13396:
-

bq. Next step: solr starts, and from all the possible zookeepers it could 
connect to, it connected to the faulty one. And that caused the deletion.

ZK clients (including Solr) connect to *ALL* of the zookeepers that have been 
configured.  They don't connect to just one server unless they have only been 
configured with one server.  ZK should never be placed behind a load balancer.  
If Solr has been configured with multiple servers and it can only connect to 
one, that seems like something we should detect (if we can) and probably refuse 
to proceed with startup.
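
For SolrJ users, that means listing every ensemble member in the connect 
string. A minimal sketch against the SolrJ 7+ builder (host names are 
illustrative):

{code:java}
import java.util.Arrays;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class ZkHostsDemo {
  public static void main(String[] args) throws Exception {
    // Give the client the whole ensemble so it can fail over on its own;
    // never put ZooKeeper behind a load balancer.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
            Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"),
            Optional.empty() /* no chroot */)
        .build()) {
      client.connect();
    }
  }
}
{code}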

bq. I'd go even further and says: make it an option, default disabled, to shut 
down the solr in case this happens.

That's an interesting idea.  If we combine it with what I initially proposed, 
then there's a hybrid solution that I will describe here:

We create a new option to prevent Solr startup when there are cores that aren't 
referenced in ZK.  Initially, this option will default to disabled, but at some 
point (probably 9.0) we flip the default to enabled.

If the new option is enabled, then Solr will not complete startup when that 
situation is found.  The log will indicate why this has happened.  The 
"bootstrap" option will take priority over the new option if it is found.

If the new option is disabled, then here's what will happen:

Cores that do not exist in ZK will not start.  Solr will check for a file in 
the solr home, with a name like allow_auto_core_delete, and if that file 
exists, it will be deleted and then Solr will proceed as if another new option 
were enabled, and delete unreferenced cores.  The new option described here 
will default to false and the default will not change in a later release.
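
A hypothetical sketch of that marker-file check; the file name comes from the 
proposal above, and the surrounding startup logic does not exist yet:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AutoCoreDeleteCheck {
  static boolean allowAutoCoreDelete(Path solrHome) throws IOException {
    Path marker = solrHome.resolve("allow_auto_core_delete");
    // deleteIfExists returns true only when the file was present, so the
    // permission is consumed the first time it is used.
    return Files.deleteIfExists(marker);
  }
}
{code}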


> SolrCloud will delete the core data for any core that is not referenced in 
> the clusterstate
> ---
>
> Key: SOLR-13396
> URL: https://issues.apache.org/jira/browse/SOLR-13396
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3.1, 8.0
>Reporter: Shawn Heisey
>Priority: Major
>
> SOLR-12066 is an improvement designed to delete core data for replicas that 
> were deleted while the node was down -- better cleanup.
> In practice, that change causes SolrCloud to delete all core data for cores 
> that are not referenced in the ZK clusterstate.  If all the ZK data gets 
> deleted or the Solr instance is pointed at a ZK ensemble with no data, it 
> will proceed to delete all of the cores in the solr home, with no possibility 
> of recovery.
> I do not think that Solr should ever delete core data unless an explicit 
> DELETE action has been made and the node is operational at the time of the 
> request.  If a core exists during startup that cannot be found in the ZK 
> clusterstate, it should be ignored (not started) and a helpful message should 
> be logged.  I think that message should probably be at WARN so that it shows 
> up in the admin UI logging tab with default settings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-04-18 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821483#comment-16821483
 ] 

Shawn Heisey commented on SOLR-13394:
-

The reason that I used AggressiveOpts in my GC experiments is that it is 
mentioned by the documentation in words like these:  "Enabling options that are 
expected to be enabled by default in the next major release of Java."  I would 
not expect the Java developers to include anything in that option if its 
quality is questionable ... but I could be wrong!

If that option is deprecated, then there would be no reason to keep it.


> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13394.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12461) Upgrade Dropwizard Metrics to 4.0.5 release

2019-04-18 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821378#comment-16821378
 ] 

Shawn Heisey commented on SOLR-12461:
-

I thought LGPL was OK with the Apache License, but I've checked and it's not.  
I wonder where I got that bad info from.

> Upgrade Dropwizard Metrics to 4.0.5 release
> ---
>
> Key: SOLR-12461
> URL: https://issues.apache.org/jira/browse/SOLR-12461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12461.patch
>
>
> This version of the library contains several improvements and it's compatible 
> with Java 9. 
> However, starting from 4.0.0 metrics-ganglia is no longer available, which 
> means that if we upgrade we will have to remove the corresponding 
> {{SolrGangliaReporter}}.
> Such change is not back-compatible, so I see the following options:
> * wait with the upgrade until 8.0
> * upgrade and remove {{SolrGangliaReporter}} and describe this in the release 
> notes.
> Any other suggestions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate

2019-04-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816433#comment-16816433
 ] 

Shawn Heisey commented on SOLR-13396:
-

If I'm not mistaken, I think that delete operations happen through the 
overseer.  I'm guessing that we don't want operations that couldn't be handled 
to stick around in the overseer queue ... but maybe we could create a secondary 
queue for things (like deletes) that were never acknowledged, and the overseer 
can occasionally revisit those items to see if it's possible to complete them.

> SolrCloud will delete the core data for any core that is not referenced in 
> the clusterstate
> ---
>
> Key: SOLR-13396
> URL: https://issues.apache.org/jira/browse/SOLR-13396
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3.1, 8.0
>Reporter: Shawn Heisey
>Priority: Major
>
> SOLR-12066 is an improvement designed to delete core data for replicas that 
> were deleted while the node was down -- better cleanup.
> In practice, that change causes SolrCloud to delete all core data for cores 
> that are not referenced in the ZK clusterstate.  If all the ZK data gets 
> deleted or the Solr instance is pointed at a ZK ensemble with no data, it 
> will proceed to delete all of the cores in the solr home, with no possibility 
> of recovery.
> I do not think that Solr should ever delete core data unless an explicit 
> DELETE action has been made and the node is operational at the time of the 
> request.  If a core exists during startup that cannot be found in the ZK 
> clusterstate, it should be ignored (not started) and a helpful message should 
> be logged.  I think that message should probably be at WARN so that it shows 
> up in the admin UI logging tab with default settings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate

2019-04-11 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-13396:
---

 Summary: SolrCloud will delete the core data for any core that is 
not referenced in the clusterstate
 Key: SOLR-13396
 URL: https://issues.apache.org/jira/browse/SOLR-13396
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 8.0, 7.3.1
Reporter: Shawn Heisey


SOLR-12066 is an improvement designed to delete core data for replicas that 
were deleted while the node was down -- better cleanup.

In practice, that change causes SolrCloud to delete all core data for cores 
that are not referenced in the ZK clusterstate.  If all the ZK data gets 
deleted or the Solr instance is pointed at a ZK ensemble with no data, it will 
proceed to delete all of the cores in the solr home, with no possibility of 
recovery.

I do not think that Solr should ever delete core data unless an explicit DELETE 
action has been made and the node is operational at the time of the request.  
If a core exists during startup that cannot be found in the ZK clusterstate, it 
should be ignored (not started) and a helpful message should be logged.  I 
think that message should probably be at WARN so that it shows up in the admin 
UI logging tab with default settings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12809) Document recommended Java/Solr combinations (JDK 11?)

2019-03-25 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800537#comment-16800537
 ] 

Shawn Heisey commented on SOLR-12809:
-

I was fiddling with my wife's laptop.  Noticed that it said Java needed an 
update.  Since I have the JDK on there, I went to update that too.

Downloading JDK 8u201, I noticed that it has the same license as Java 11, 
prohibiting production use without a separate license.

> Document recommended Java/Solr combinations (JDK 11?)
> -
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK .vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12809) Document recommended Java/Solr combinations (JDK 11?)

2019-03-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797374#comment-16797374
 ] 

Shawn Heisey commented on SOLR-12809:
-

bq. That does seem like a good idea. But I do think it might be prudent to 
point out that Oracle Java 11+ requires payment and OpenJDK 11+ doesn't.

Or at least mention that some license agreements have changed recently and 
licenses should be carefully reviewed when choosing a JVM.

> Document recommended Java/Solr combinations (JDK 11?)
> -
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK .vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12809) Document recommended Java/Solr combinations (JDK 11?)

2019-03-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797345#comment-16797345
 ] 

Shawn Heisey commented on SOLR-12809:
-

bq. I would rather we/Solr not "recommend" any particular JDK any more than 
other Java software should do so either.

That does seem like a good idea.  But I do think it might be prudent to point 
out that Oracle Java 11+ requires payment and OpenJDK 11+ doesn't.

> Document recommended Java/Solr combinations (JDK 11?)
> -
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK .vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9698) Fix bin/solr script calculations - start/stop wait time and RMI_PORT on Windows

2019-03-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788202#comment-16788202
 ] 

Shawn Heisey commented on SOLR-9698:


Oh, the patch just submitted has an echo (for debugging) that needs to be 
removed before it is committed.

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT on 
> Windows
> ---
>
> Key: SOLR-9698
> URL: https://issues.apache.org/jira/browse/SOLR-9698
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Erick Erickson
>Priority: Major
> Attachments: SOLR-9698.patch
>
>
> Killing the Solr process after 5 seconds is too harsh. See the discussion at 
> SOLR-9371.
> SOLR-9371 fixes the *nix versions, we need to do something similar for 
> Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9698) Fix bin/solr script calculations - start/stop wait time and RMI_PORT on Windows

2019-03-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788174#comment-16788174
 ] 

Shawn Heisey commented on SOLR-9698:


Something that would be immensely useful for testing this sort of thing would 
be a URL endpoint that creates a thread deadlock within Solr, so that it will 
never gracefully stop.
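
For illustration, a minimal standalone sketch (not Solr code) of the kind of 
deliberate deadlock such an endpoint could trigger -- two threads taking the 
same two locks in opposite order:

{code:java}
// Hypothetical illustration only -- not an existing Solr endpoint.
public class DeadlockDemo {
  private static final Object LOCK_A = new Object();
  private static final Object LOCK_B = new Object();

  public static void main(String[] args) {
    new Thread(() -> lockInOrder(LOCK_A, LOCK_B)).start();
    new Thread(() -> lockInOrder(LOCK_B, LOCK_A)).start();
  }

  private static void lockInOrder(Object first, Object second) {
    synchronized (first) {
      // Hold the first lock long enough that both threads end up waiting
      // on each other's lock -- neither can ever proceed.
      try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
      synchronized (second) { }
    }
  }
}
{code}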

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT on 
> Windows
> ---
>
> Key: SOLR-9698
> URL: https://issues.apache.org/jira/browse/SOLR-9698
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Erick Erickson
>Priority: Major
>
> Killing the Solr process after 5 seconds is too harsh. See the discussion at 
> SOLR-9371.
> SOLR-9371 fixes the *nix versions, we need to do something similar for 
> Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-02 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706480#comment-16706480
 ] 

Shawn Heisey commented on SOLR-13035:
-

My current opinion:  The solr.data.home parameter is an expert option.  If you 
want to change where Solr writes its data, just change the solr home 
(solr.solr.home property, -s option on the bin/solr script) -- both the core 
instanceDirs and the dataDirs will be under that location.
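
For example (a sketch, assuming a default install; /var/solr/data is a 
hypothetical directory):

{noformat}
bin/solr start -s /var/solr/data
{noformat}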

The only reason to define solr.data.home is to put instanceDir and dataDir 
definitions in different places without having to manually edit core.properties 
files to set dataDir.  Expert users might have reason to separate core configs 
from core data ... but I don't think typical users need it.  Also, that 
separation is readily achieved by running in SolrCloud mode.

To solve issues for the docker images, we need to have the ability for Solr to 
start when solr.xml is missing.  If we have that, then the docker image can 
mount an empty location for the solr home, and Solr will start.  Then the user 
has the option of adding a solr.xml file if they want to change config options 
controlled by that file.  


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if the embedded zookeeper is also started in SolrCloud mode. It would be 
> great if all writable content could live under the same directory.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13026) Admin UI - dataimport status has green bar even when import fails

2018-11-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704801#comment-16704801
 ] 

Shawn Heisey commented on SOLR-13026:
-

Some issues related to problems with computer-based parsing of the DIH status:

SOLR-2728
SOLR-2729
SOLR-3319
SOLR-3689
SOLR-4241


> Admin UI - dataimport status has green bar even when import fails
> -
>
> Key: SOLR-13026
> URL: https://issues.apache.org/jira/browse/SOLR-13026
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.5.0
> Environment: For screenshot, Solr 7.5.0 on Windows.
> The production setup is a patched 7.1.0 version on Linux.
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: DIH-failed-UI-green.png
>
>
> In the admin UI, the dataimport status screen is showing a green status bar 
> even when an import fails to run.  The error that occurred in the attached 
> screenshot was a connection problem -- in this case the database didn't 
> exist.  I have seen this in production when a URL for SQL Server is 
> incorrect.  The raw status output clearly shows "Full import failed".
> I believe that the status should show in a different color, probably red.  
> There is an icon of a green check mark in the status also.  For those who are 
> color blind, that should change to an icon with an X in it for a visual 
> indicator not related to color.
> I am painfully aware of how terrible the DIH status output is.  It is great 
> for human readability, but extremely difficult for a computer to understand.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13026) Admin UI - dataimport status has green bar even when import fails

2018-11-30 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-13026:
---

 Summary: Admin UI - dataimport status has green bar even when 
import fails
 Key: SOLR-13026
 URL: https://issues.apache.org/jira/browse/SOLR-13026
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: 7.5.0
 Environment: For screenshot, Solr 7.5.0 on Windows.
The production setup is a patched 7.1.0 version on Linux.

Reporter: Shawn Heisey
 Attachments: DIH-failed-UI-green.png

In the admin UI, the dataimport status screen is showing a green status bar 
even when an import fails to run.  The error that occurred in the attached 
screenshot was a connection problem -- in this case the database didn't exist.  
I have seen this in production when a URL for SQL Server is incorrect.  The raw 
status output clearly shows "Full import failed".

I believe that the status should show in a different color, probably red.  
There is an icon of a green check mark in the status also.  For those who are 
color blind, that should change to an icon with an X in it for a visual 
indicator not related to color.

I am painfully aware of how terrible the DIH status output is.  It is great for 
human readability, but extremely difficult for a computer to understand.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13008) JSON Document Transformer doesn't heed "indent" parameter

2018-11-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701227#comment-16701227
 ] 

Shawn Heisey commented on SOLR-13008:
-

I had to go look at the docs to figure out what adding {{:[json]}} to a field 
in the fl param does.

The docs are not terribly clear on it, though.  The PDF I looked at (7.5) says 
that you get "the actual raw XML or JSON structure instead of just the string 
value."  But exactly what that means is not really clear to me, and there's no 
output shown for clarification of what it actually does.
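
My best guess at the difference, as a sketch -- assuming a stored string field 
whose value is the text {"a":1}:

{noformat}
without the transformer:  "subject":"{\"a\":1}"
with fl=subject:[json]:   "subject":{"a":1}
{noformat}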

The premise sounds reasonable, but my understanding of the code and the impact 
of the suggested change is pretty low.

Minor note:  The patch would fail precommit, because there's no checksum file 
for the added jar.  If I remember right, that's added with "ant jar-checksums".

> JSON Document Transformer doesn't heed "indent" parameter
> -
>
> Key: SOLR-13008
> URL: https://issues.apache.org/jira/browse/SOLR-13008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.5
>Reporter: Eric Pugh
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A query like "wt=json&fl=id,subject:[json]&indent=true", will return the 
> field subject as JSON, however the indent parameter is ignored on the nested 
> JSON.   This can lead to a very wide response that you have to scroll over.  
> Often I add indent=true becasue I want to see the structure of my embedded 
> JSON text, and I need to cut'n'paste over to another editor.   This would let 
> the nested JSON object be pretty printed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13018) In solr-cloud mode, It throws an error when i create a collection with schema that has fieldType containing openNLP tokenizer and filters

2018-11-27 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-13018.
-
   Resolution: Invalid
Fix Version/s: (was: 7.3.2)

For this project, Jira is not a support portal.  When you created this issue, 
there was a note in prominent red letters saying "This project has a user 
mailing list and an IRC channel for support. Please ensure that you have 
discussed your problem using one of those resources BEFORE creating this 
ticket."

Based on everything you've said, this is not sounding like a bug.  I believe 
that there's something you're not doing correctly.

Some things to note when you reach one of those resources:

bq. Caused by: Can't find resource 'solrconfig.xml'

It's saying that it can't find the solrconfig.xml file in the config that's in 
zookeeper.  At a minimum, your configset must include a solrconfig.xml file and 
a file for your schema as well.  Most likely the schema file will need to be 
named "managed-schema" with no extension.

bq. ERROR: Error uploading file 
/opt/solr/server/solr/configsets/xyz/conf/en-pos-maxent.bin to zookeeper path 
/configs/xyz/en-pos-maxent.bin

There will be a LOT more to this error; it could be several dozen lines long.  
If I had to guess, the problem is likely that the file in question is larger 
than the maximum node size enforced by zookeeper, which defaults to slightly 
less than one megabyte.
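
If that turns out to be the cause, the limit can be raised with the 
jute.maxbuffer system property, which must be set on every ZooKeeper server and 
on the Solr side as well.  A sketch, assuming a 10MB limit and default install 
paths:

{noformat}
# on each external ZooKeeper server (JVM flags)
-Djute.maxbuffer=10485760

# on each Solr node, e.g. in bin/solr.in.sh
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
{noformat}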

If after discussion on the mailing list or IRC channel it is determined that 
Solr actually does have a bug, then the problem can be explored in Jira.


> In solr-cloud mode, It throws an error when i create a collection with schema 
> that has fieldType containing openNLP tokenizer and filters
> -
>
> Key: SOLR-13018
> URL: https://issues.apache.org/jira/browse/SOLR-13018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 7.3.1
>Reporter: Parmeshwor Thapa
>Priority: Major
>
> Here is schema for field:
> {code:java}
> <fieldType name="text_opennlp" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.OpenNLPTokenizerFactory" 
> tokenizerModel="en-token.bin" sentenceModel="en-sent.bin"/>
>     <filter class="solr.OpenNLPPOSFilterFactory" 
> posTaggerModel="en-pos-maxent.bin"/>
>     <filter class="solr.OpenNLPLemmatizerFilterFactory" 
> dictionary="en-lemmatizer.txt"/>
>   </analyzer>
> </fieldType>
> {code}
> I have a configset with all the files (en-token.bin, en-sent.bin, ...) in the 
> same directory. Using that configset I can successfully create a Solr core in 
> standalone mode.
> But with Solr cloud (two instances on separate servers orchestrated by 
> zookeeper), where I have the same configset on both servers, when I try to 
> create a collection it throws an error which doesn't make any sense to me.
> {code:java}
>  $ bin/solr create -p 8984 -c  xyz -n xyz_conf -d xyz_conf
> ... ERROR: Failed to create collection 'xyz' due to: 
> {example1.com:8984_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at https://example2.com:8984/solr: Error CREATEing SolrCore 
> 'xyz_shard1_replica_n1': Unable to create core [xyz_shard1_replica_n1] Caused 
> by: Can't find resource 'solrconfig.xml' in classpath or '/configs/xyz', 
> cwd=/opt/solr-7.3.1/server}
> {code}
>  
>   
> Note: uploading configset to zookeeper also fails with error
> {code:java}
> $ bin/solr create -c xyz  -n xyz_conf -d xyz_conf
> ...
> —
> ERROR: Error uploading file 
> /opt/solr/server/solr/configsets/xyz/conf/en-pos-maxent.bin to zookeeper path 
> /configs/xyz/en-pos-maxent.bin
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694230#comment-16694230
 ] 

Shawn Heisey commented on SOLR-13003:
-

Ouch.  The jar is five megabytes!

To use it, you'll need to stop Solr, delete the existing solr-core-7.3.1.jar 
file from server/solr-webapp/webapp/WEB-INF/lib and copy this new jar in its 
place, then start Solr back up.  Before starting Solr again, you will also need 
to edit your log4j.properties file to increase the rollover size for the 
logfile from 4MB to something much larger.  I expect there to be a LOT of data 
logged.
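
Roughly like this, assuming a default install layout (adjust paths to your 
system):

{noformat}
bin/solr stop -all
rm server/solr-webapp/webapp/WEB-INF/lib/solr-core-7.3.1.jar
cp /path/to/solr-core-7.3.1-SNAPSHOT.jar server/solr-webapp/webapp/WEB-INF/lib/
# raise log4j.appender.file.MaxFileSize in server/resources/log4j.properties
bin/solr start
{noformat}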

Then you'll have to do the steps that result in the cache getting much larger 
than the defined size.

> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solr-core-7.3.1-SNAPSHOT.jar, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694234#comment-16694234
 ] 

Shawn Heisey commented on SOLR-13003:
-

OK, I was wrong.  I thought I had made an error that would result in things 
being logged for caches that were not defined with the max ram size, but after 
checking the code more carefully, I have concluded that this will not be a 
problem.  So the attached files are good.

> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solr-core-7.3.1-SNAPSHOT.jar, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694231#comment-16694231
 ] 

Shawn Heisey commented on SOLR-13003:
-

I just realized that I have made a mistake in the patch.  I will update the 
patch and the jar.  Wait until I get that done before trying it.

> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solr-core-7.3.1-SNAPSHOT.jar, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-13003:

Attachment: solr-core-7.3.1-SNAPSHOT.jar

> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solr-core-7.3.1-SNAPSHOT.jar, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-13003:

Attachment: CLRU-logging.patch

> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694228#comment-16694228
 ] 

Shawn Heisey commented on SOLR-13003:
-

Attached a patch for adding logging.  Will attach a custom jar when Solr 
finishes compiling.

[~cetra3], you're in the best position to test this.  I no longer have access 
to Solr servers that I can change at will, so it's difficult for me to do this.

I would suggest that when testing this, you only have one of Solr's caches set 
up with maxRamMB, and define all the others with the usual "size" parameters.  If 
you could provide a file with a typical search response from Solr so we can see 
how much data a response contains, that would be helpful.  If you choose to do 
testing with the filterCache, we will need to know how many docs (maxDoc, not 
numDoc) the core contains.
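
Something like this in solrconfig.xml (a sketch; the sizes are arbitrary):

{code:xml}
<!-- only the queryResultCache is RAM-limited; the others keep size limits -->
<queryResultCache class="solr.FastLRUCache" maxRamMB="64" autowarmCount="0"/>
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
{code}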


> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: CLRU-logging.patch, lrucacheexpanded.png, 
> lrucachemaxmb.png, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694192#comment-16694192
 ] 

Shawn Heisey commented on SOLR-13003:
-

ConcurrentLRUCache code is identical between 7.3.1 and master.  So there are no 
bugs fixed after the version you're running.

If the entry object in the cache is not an instance of the Accountable 
interface, then ConcurrentLRUCache assumes it to be 192 bytes.  This might be 
very small compared to actual cache entries.  At 192 bytes, 64MB can handle 
over 300,000 entries.
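
The arithmetic behind that estimate:

{noformat}
64 * 1024 * 1024 bytes / 192 bytes per entry = 349,525 entries
{noformat}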

A DocList does implement Accountable.  SolrCache and QueryResultKey do not.  
I'm betting that the DocList instances are what the cache will be using to 
calculate its size.

In that expanded tree that you shared, the "ramBytes" field seems to be saying 
that the cache is tracking its size as about 7 megabytes, which suggests that 
the code to calculate the cache size is not working right.

We should probably come up with a custom solr-core jar where ConcurrentLRUCache 
has extra logging, and see if we can track down where the problem is.  It'll 
get extremely verbose.


> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: lrucacheexpanded.png, lrucachemaxmb.png, solrconfig.xml
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694114#comment-16694114
 ] 

Shawn Heisey commented on SOLR-13003:
-

ConcurrentLRUCache looks (on first blush) to be coded right, but further 
checking might reveal a problem that wasn't immediately apparent.

The rest of this comment is a side note.  It's not really connected to the 
problem in this issue, but I came across it while looking into the code, so I 
wanted to note it.

This bit of code in FastLRUCache looks problematic to me:

{code:java}
if (maxRamBytes != Long.MAX_VALUE)  {
  int ramLowerWatermark = (int) (maxRamBytes * 0.8);
  description = generateDescription(maxRamBytes, ramLowerWatermark, newThread);
  cache = new ConcurrentLRUCache<K, V>(ramLowerWatermark, maxRamBytes, newThread, null);
} else  {
  description = generateDescription(limit, initialSize, minLimit, acceptableLimit, newThread);
  cache = new ConcurrentLRUCache<>(limit, minLimit, acceptableLimit, initialSize, newThread, false, null);
}
{code}

The reason I think it's problematic:  The 80 percent calculation is converted 
to an int ... which means that the maximum that the low watermark can be is 2GB 
... that's incorrect if you have requested a large enough max size.

I did some test code with that calculation.  With an input of 8589934592, which 
is 8GB, the value for the 80 percent calculation converted to an int is 
2147483647.  Far less than 80 percent.  I think that ramLowerWatermark should 
be a long, and the cast to int shouldn't be there.
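
A standalone sketch (not the Solr code itself) demonstrating the narrowing cast:

{code:java}
public class WatermarkCast {
  public static void main(String[] args) {
    long maxRamBytes = 8589934592L;          // 8GB
    // double-to-int narrowing saturates at Integer.MAX_VALUE
    int asInt = (int) (maxRamBytes * 0.8);
    long asLong = (long) (maxRamBytes * 0.8);
    System.out.println(asInt);               // 2147483647 -- about 2GB
    System.out.println(asLong);              // 6871947673 -- the intended 80 percent
  }
}
{code}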


> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: lrucachemaxmb.png
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13003) Query Result Cache does not honour maxRamBytes parameter

2018-11-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694102#comment-16694102
 ] 

Shawn Heisey commented on SOLR-13003:
-

What *EXACTLY* do you have in solrconfig.xml?  Can you share the entire file?

So far, this is sounding like a support issue -- and this issue tracker is not 
for support requests.

I would like to see what is shown if you open up the "cache" line (type 
ConcurrentLRUCache) -- the one that shows 1.5GB.


> Query Result Cache does not honour maxRamBytes parameter
> 
>
> Key: SOLR-13003
> URL: https://issues.apache.org/jira/browse/SOLR-13003
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
>Reporter: Cetra Free
>Priority: Major
> Attachments: lrucachemaxmb.png
>
>
> When using the maxRamBytes parameter with the queryResultCache directive, we 
> have seen the retained size of the cache orders of magnitude larger than what 
> is configured.
> Please see attached VisualVM output which shows the retained size is about 
> 1.5gb, but the maxRamBytes is set to 64mb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2018-11-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692005#comment-16692005
 ] 

Shawn Heisey commented on SOLR-12999:
-

As noted by Erick, these statements should be true:

On *nix, Lucene/Solr can successfully delete any files, but until the current 
searcher closes them, the space won't be released.  On Windows, trying to 
delete files that a searcher has open will fail.

I like the idea, but for these reasons, I don't think it will actually work.


> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Priority: Major
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce disk capacity requirements of Solr, and it 
> would reduce some disk fragmentation when space get tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would only be useful if 
> there is no SolrIndexSearcher open, since it would prevent the removal of 
> files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12985) ClassNotFound indexing crypted documents

2018-11-13 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685747#comment-16685747
 ] 

Shawn Heisey edited comment on SOLR-12985 at 11/13/18 10:14 PM:


And the entire solr.log file generated when you start Solr with the core added 
and initiate the action that generates the error.


was (Author: elyograg):
And the entire solr.log file generated when you start Solr with the core added.

> ClassNotFound indexing crypted documents
> 
>
> Key: SOLR-12985
> URL: https://issues.apache.org/jira/browse/SOLR-12985
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.3.1
>Reporter: Luca
>Priority: Critical
>
> When indexing a BLOB containing an encrypted Office document (xls or xlsx, but 
> I think all types), it fails with a very bad exception; if the document is not 
> encrypted, it works fine.
> I'm using the DataImportHandler.
> The exception also seems to bypass onError=skip or continue, making the 
> import fail.
> I tried to move the libraries from contrib/extraction/lib/ to server/lib and 
> the class that is not found changes, so it's a class loading issue.
> This is the base exception:
> Exception while processing: document_index document : 
> SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, 
> title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, 
> abstract= Azioni di recupero intraprese sulle Fatture telefoniche, 
> insert_date=2019-09-28 00:00:00.0, type=Documenti, 
> url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: 
> Unable to read content Processing Document # 1
>     at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171)
>     at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165)
>     ... 10 more
> Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:150)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:102)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:148)
>     ... 17 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org

[jira] [Commented] (SOLR-12985) ClassNotFound indexing crypted documents

2018-11-13 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685747#comment-16685747
 ] 

Shawn Heisey commented on SOLR-12985:
-

And the entire solr.log file generated when you start Solr with the core added.

> ClassNotFound indexing crypted documents
> 
>
> Key: SOLR-12985
> URL: https://issues.apache.org/jira/browse/SOLR-12985
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.3.1
>Reporter: Luca
>Priority: Critical
>
> When indexing a BLOB containing an encrypted Office document (xls or xlsx, but 
> I think all types), it fails with a very bad exception; if the document is not 
> encrypted, it works fine.
> I'm using the DataImportHandler.
> The exception also seems to bypass onError=skip or continue, making the 
> import fail.
> I tried to move the libraries from contrib/extraction/lib/ to server/lib and 
> the class that is not found changes, so it's a class loading issue.
> This is the base exception:
> Exception while processing: document_index document : 
> SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, 
> title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, 
> abstract= Azioni di recupero intraprese sulle Fatture telefoniche, 
> insert_date=2019-09-28 00:00:00.0, type=Documenti, 
> url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: 
> Unable to read content Processing Document # 1
>     at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171)
>     at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165)
>     ... 10 more
> Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:150)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:102)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:148)
>     ... 17 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12985) ClassNotFound indexing crypted documents

2018-11-13 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685601#comment-16685601
 ] 

Shawn Heisey commented on SOLR-12985:
-

We will need all the config files from your core, and a description of any 
modifications you have made to the file/directory structure of the Solr install.

> ClassNotFound indexing crypted documents
> 
>
> Key: SOLR-12985
> URL: https://issues.apache.org/jira/browse/SOLR-12985
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.3.1
>Reporter: Luca
>Priority: Critical
>
> When indexing a BLOB containing an encrypted Office document (xls or xlsx, but 
> I think all types), it fails with a very bad exception; if the document is not 
> encrypted, it works fine.
> I'm using the DataImportHandler.
> The exception also seems to bypass onError=skip or continue, making the 
> import fail.
> I tried to move the libraries from contrib/extraction/lib/ to server/lib and 
> the class that is not found changes, so it's a class loading issue.
> This is the base exception:
> Exception while processing: document_index document : 
> SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, 
> title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, 
> abstract= Azioni di recupero intraprese sulle Fatture telefoniche, 
> insert_date=2019-09-28 00:00:00.0, type=Documenti, 
> url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: 
> Unable to read content Processing Document # 1
>     at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171)
>     at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165)
>     ... 10 more
> Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:150)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:102)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:148)
>     ... 17 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12985) ClassNotFound indexing crypted documents

2018-11-13 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-12985.
-
Resolution: Invalid

The Solr project does not use Jira as a support portal.

Problems should be discussed via normal support channels, to determine whether 
an issue in Jira is appropriate.  This information is displayed prominently in 
a red font anytime somebody begins to open an issue in the SOLR project.

This is likely a problem specific to your install, not a bug in Solr.

The class in question is contained in this jar, at this path location in the 
most recent Solr release:
contrib/extraction/lib/poi-ooxml-3.17.jar

If you do not know how to make sure this jar is loaded, please ask for help via 
either the solr-user mailing list or the IRC channel.

http://lucene.apache.org/solr/community.html#mailing-lists-irc

If it is determined that there actually is a bug in Solr, then we can re-open 
this issue, or open a new one.

> ClassNotFound indexing crypted documents
> 
>
> Key: SOLR-12985
> URL: https://issues.apache.org/jira/browse/SOLR-12985
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.3.1
>Reporter: Luca
>Priority: Critical
>
> When indexing a BLOB containing an encrypted Office document (xls or xlsx, but 
> I think all types), it fails with a very bad exception; if the document is not 
> encrypted, it works fine.
> I'm using the DataImportHandler.
> The exception also seems to bypass onError=skip or continue, making the 
> import fail.
> I tried to move the libraries from contrib/extraction/lib/ to server/lib and 
> the class that is not found changes, so it's a class loading issue.
> This is the base exception:
> Exception while processing: document_index document : 
> SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, 
> title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, 
> abstract= Azioni di recupero intraprese sulle Fatture telefoniche, 
> insert_date=2019-09-28 00:00:00.0, type=Documenti, 
> url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: 
> Unable to read content Processing Document # 1
>     at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171)
>     at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364)
>     at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
>     at 
> org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
>     at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165)
>     ... 10 more
> Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:150)
>     at 
> org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:102)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203)
>     at 
> org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132)
>     at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
>     ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at 
> org.eclipse.jetty.weba

[jira] [Commented] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-11-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684305#comment-16684305
 ] 

Shawn Heisey commented on SOLR-12639:
-

There was an email today on the Jetty list about the latest release.  The 
important part of that email said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018


> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket will aim to replace/add support for HTTP/2 by using the Jetty HTTP 
> client in Solr. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and 
> Mark Miller's starburst branch. I will try to keep jira/http2 as close to 
> master as possible (this will make merging in the future easier). At the same 
> time, changes in the starburst branch will be split into smaller, testable 
> parts and pushed to the jira/http2 branch. 
> Anyone who is interested in HTTP/2 for Solr can use the jira/http2 branch, but 
> there is no backward-compatibility guarantee.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-11-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684305#comment-16684305
 ] 

Shawn Heisey edited comment on SOLR-12639 at 11/12/18 7:54 PM:
---

There was an email today on the Jetty list about the latest release.  The 
important part of that email (for this issue) said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018



was (Author: elyograg):
There was an email today on the Jetty list about the latest release.  The 
important part of that email said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018


> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket will aim to replace/add support for HTTP/2 by using the Jetty HTTP 
> client in Solr. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and 
> Mark Miller's starburst branch. I will try to keep jira/http2 as close to 
> master as possible (this will make merging in the future easier). At the same 
> time, changes in the starburst branch will be split into smaller, testable 
> parts and pushed to the jira/http2 branch. 
> Anyone who is interested in HTTP/2 for Solr can use the jira/http2 branch, but 
> there is no backward-compatibility guarantee.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12974) RandomSort not consistent in SolrCloud Mode

2018-11-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680981#comment-16680981
 ] 

Shawn Heisey commented on SOLR-12974:
-

One thing you can do for a workaround is to upgrade to 7.x and use the new TLOG 
or PULL replica types.  The downside is that it requires upgrading to a new 
major version.  If you have a test environment, that may not be a major problem.

I suspect that it would be very difficult to guarantee the same index version 
when using NRT replicas, which was the only type before 7.x.  I could be wrong 
about that.
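
For reference, a sketch of creating a collection with TLOG replicas on 7.x (the 
collection name and counts here are arbitrary):

{noformat}
http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&tlogReplicas=2
{noformat}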


> RandomSort not consistent in SolrCloud Mode
> ---
>
> Key: SOLR-12974
> URL: https://issues.apache.org/jira/browse/SOLR-12974
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.5.1
>Reporter: Shrey Shivam
>Priority: Minor
>
> Expected behaviour of RandomSort is that given the same random field name 
> (random_) which acts as a seed, the sorting order will remain consistent 
> for the same version of the Solr index.
> From schema.xml:
> {{<dynamicField name="random_*" type="random" indexed="true" stored="false"/>}}
>  
> In master-slave mode, replication happens based on index version. If the 
> slave's version number is different than the master's, replication is done by 
> the slave and its index version is updated to match the master's.
> However in SolrCloud mode, observation has been that replicas of the same 
> shard do not maintain the same version number at all times even though the 
> documents are same and consistent. 
> This has been previously discussed in [mailing list 
> |https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201508.mbox/%3ccae3utzmggprv-p6juwjwm2yyyxfw893xayq7+2hav7mmobm...@mail.gmail.com%3E]as
>  well.
> {quote}SolrCloud works very differently than the old master-slave replication.
> The index is NOT copied from the leader to the other replicas, except
>  in extreme recovery circumstances.
> Each replica builds its own copy of the index independently from the
>  others. Due to slight timing differences in the indexing operations,
>  and possible actions related to transaction log replay on node restart,
>  each replica may end up with a different index layout. There also could
>  be differences in the number of deleted documents. Unless something
>  goes really wrong, all replicas should contain the same live documents.
> {quote}
>  
> When a query is made to a shard which has 2 or more replicas, any replica may be 
> chosen to respond to the query. Now, if the replicas do not all have the same 
> index version, RandomSort will generate the random hash seed differently for the 
> same random_ field name.
> In the source code of the 
> [RandomSort|https://github.com/apache/lucene-solr/blob/branch_6_5/solr/core/src/java/org/apache/solr/schema/RandomSortField.java]
>  class, line 86 uses the index version (of the shard) to create the 
> random hash seed.
> Hence, when querying a Solr collection with the same query, Solr gives 
> different results depending on the version mismatch between replicas as well as 
> on which replica serves the request each time.
>  
> Example of Solr Query where random field is being used:
> {code:java}
> https://solr-stage.mydomain.com:8983/solr/mycollection/select?wt=json&q=*:*&defType=edismax&fl=id&boost=if(query({!v='documentDate:[2018-11-07
>  TO 
> *]'}),sum(div(scale(random_SW84gaDAf3RynhOyGQDZlgAAAYc1,0,1),1),sub(1,div(1,1))),if(or(exists(query({!v='documentType:sponsored'})),exists(query({!v='documentType:featured'}))),sum(div(scale(random_SW84gaDAf3RynhOyGQDZlgAAAYc1,0,1),4),sub(1,div(1,4))),
>  
> if(or(exists(query({!v='documentType:listing'})),exists(query({!v='documentType:promotional'}))),sum(div(scale(random_SW84gaDAf3RynhOyGQDZlgAAAYc1,0,1),2),sub(1,div(1,2))),scale(random_SW84gaDAf3RynhOyGQDZlgAAAYc1,0,1
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677352#comment-16677352
 ] 

Shawn Heisey commented on SOLR-12967:
-

In general, I completely agree that MOVEREPLICA should preserve the replica 
type that already exists.

But when I was thinking about the idea where somebody could specify a default 
replica type, I wondered if some people might want that to override what things 
like MOVEREPLICA do by default.  I'm not sure that such an option should be 
implemented, but I did think of it.

[~gilson.nascimento] also noticed that UTILIZENODE created NRT replicas.  Which 
might really be the same problem -- it would be reasonable for UTILIZENODE to 
be implemented internally as MOVEREPLICA.
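
To illustrate the current (7.5) API surface -- the type parameter on MOVEREPLICA 
below is hypothetical, it does not exist today:

{noformat}
# ADDREPLICA already accepts a replica type:
/admin/collections?action=ADDREPLICA&collection=test&shard=shard1&type=tlog

# A hypothetical MOVEREPLICA equivalent could look like:
/admin/collections?action=MOVEREPLICA&collection=test&replica=core_node5&targetNode=node2:8983_solr&type=tlog
{noformat}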

> MOVEREPLICA converting replica to NRT
> -
>
> Key: SOLR-12967
> URL: https://issues.apache.org/jira/browse/SOLR-12967
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gilson
>Priority: Minor
>  Labels: collection-api, solr
>
> When calling Collections API's MOVEREPLICA, the new replica created is always 
> NRT type, even when the original replica is PULL or TLOG. As discussed on 
> IRC, it should use the source replica type, or provide a parameter for the 
> user to choose the new replica's type, similar to ADDREPLICA  parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677188#comment-16677188
 ] 

Shawn Heisey commented on SOLR-12967:
-

I advised Gilson to open this issue in the #solr channel.

Do we need separate issues for work on other Collections API actions that don't 
consider the replica type, or will we just expand this issue to cover checking 
the whole API?

I had a thought for a feature request -- add a couple of new settings:  1) a 
default replica type, to be used instead of NRT when nothing else indicates 
what type to use.  2) A flag to indicate whether the default replica type 
should override an existing type, which would cover things like MOVEREPLICA and 
maybe others.  When the user's request explicitly asks for a type, that would 
of course take precedence over both of these settings.
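
A sketch of what that hypothetical pair of settings might look like as cluster 
properties (neither property exists today; the names are invented for illustration):

{noformat}
/admin/collections?action=CLUSTERPROP&name=defaultReplicaType&val=tlog
/admin/collections?action=CLUSTERPROP&name=defaultReplicaTypeOverridesExisting&val=true
{noformat}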

> MOVEREPLICA converting replica to NRT
> -
>
> Key: SOLR-12967
> URL: https://issues.apache.org/jira/browse/SOLR-12967
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gilson
>Priority: Minor
>  Labels: collection-api, solr
>
> When calling Collections API's MOVEREPLICA, the new replica created is always 
> NRT type, even when the original replica is PULL or TLOG. As discussed on 
> IRC, it should use the source replica type, or provide a parameter for the 
> user to choose the new replica's type, similar to ADDREPLICA  parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675554#comment-16675554
 ] 

Shawn Heisey commented on SOLR-12955:
-

My only real concern is how existing configs are handled.

What happens if somebody upgrades a Solr installation where DistributedUpdateProcessor 
is being used in the config?  Will it continue to work as before and just log a 
deprecation warning?  And if they upgrade to a new major version without changing to 
the appropriate new class, would their core(s) fail to start?  Those two 
courses of action would IMHO be the best way to go.
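
For reference, a sketch of the kind of solrconfig.xml that would be affected by the 
refactoring (the chain name is invented for illustration; the factory classes are real):

{code:xml}
<updateRequestProcessorChain name="custom-chain">
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
{code}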


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-11-05 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675134#comment-16675134
 ] 

Shawn Heisey commented on SOLR-12243:
-

Is there an expected timeframe for this to be committed?  Anything I can do to 
accelerate it?

I'm working with the indexes experiencing the problem and would like to get back to 
an "official" binary Solr instead of a custom build.  I checked the latest patch 
on branch_7x and it looks like it's working as expected.


> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, multiword-synonyms.txt, 
> schema.xml, solrconfig.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> <!-- Reconstructed: the XML tags were stripped by the mail archive; element and
>      parameter names are inferred from the values and the pf/pf2/pf3 discussion. -->
> <requestHandler name="/select" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="defType">edismax</str>
>     <float name="tie">0.4</float>
>     <str name="qf">title^100</str>
>     <str name="pf">title~20^5000</str>
>     <str name="pf2">title~11</str>
>     <str name="pf3">title~22^1000</str>
>     <str name="df">text</str>
>     <str name="mm">3<-1 6<-3 9<30%</str>
>     <str name="q.alt">*:*</str>
>     <int name="rows">25</int>
>   </lst>
> </requestHandler>
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12952) TFIDF scorer uses max docs instead of num docs when using Edismax

2018-11-01 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671568#comment-16671568
 ] 

Shawn Heisey commented on SOLR-12952:
-

Lucene scoring has always been influenced by deleted docs.  SolrCloud's NRT 
replica type has always had the potential for this problem, because different 
replicas having different numbers of deleted documents has always been a 
possibility.  If you use TLOG/PULL replica types (new in 7.x) then all replicas 
will be absolutely identical, and this can't happen.

It is not a bug.  I don't know if eliminating the influence of deleted 
documents on the scores in a query is even *possible*.  Attempting to do it 
would kill performance.
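
A small sketch showing where the difference comes from (these are standard Lucene 
APIs; the index path argument is an assumption):

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

// Deleted-but-not-yet-merged docs are counted by maxDoc() but not by numDocs(),
// so two NRT replicas with different delete counts produce different statistics.
public class DeletedDocsDemo {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(
        FSDirectory.open(Paths.get(args[0])))) { // arg: path to a Lucene index
      System.out.println("maxDoc=" + reader.maxDoc()
          + " numDocs=" + reader.numDocs()
          + " deleted=" + reader.numDeletedDocs());
    }
  }
}
{code}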

> TFIDF scorer uses max docs instead of num docs when using Edismax
> -
>
> Key: SOLR-12952
> URL: https://issues.apache.org/jira/browse/SOLR-12952
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> I have recently noticed some odd behavior while using the edismax query 
> parser.
> The scores returned by documents seem to be affected by deleted documents, 
> which have yet to be merged and completely removed from the index.
> This causes different shards to return different scores for the same query.
> Is this a bug, or am I missing something?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-29 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16667210#comment-16667210
 ] 

Shawn Heisey commented on SOLR-12848:
-

Once you've got the patch applied, exactly how to proceed will depend on what 
you want to do.  Typing "ant clean server" in the solr directory will create a 
runnable server (bin/solr start works).  Typing "ant clean package" will create 
binary packages just like those you can download from the Solr website, but 
will likely have -SNAPSHOT in the version name.
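
In other words, a minimal build workflow (assuming the patch is already applied):

{noformat}
cd solr
ant clean server    # builds a runnable server; start it with bin/solr start
ant clean package   # builds binary packages, likely with -SNAPSHOT in the name
{noformat}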


> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch, SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.
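
A minimal sketch of the gist of the fix described above (how SolrJ actually wires up 
its client may differ):

{code:java}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

public class ProxyAwareClient {
  public static void main(String[] args) {
    // useSystemProperties() makes the client honor http.proxyHost,
    // http.proxyPort and related system properties again.
    CloseableHttpClient client = HttpClientBuilder.create()
        .useSystemProperties()
        .build();
    System.out.println("Client built: " + client);
  }
}
{code}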



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-29 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16667204#comment-16667204
 ] 

Shawn Heisey edited comment on SOLR-12848 at 10/29/18 1:33 PM:
---

bq. not sure how to install it now, if you can give me a hint

You can't install a patch on a binary download of Solr.  You have to obtain the 
source code for the specific version you want to patch, apply the patch, and 
then build Solr.

https://wiki.apache.org/solr/HowToContribute

The wiki page linked above describes how to get the source code and figure out 
which branch you want.  It also has some information about working with 
patches, although it doesn't have any info on applying them with git, which I 
prefer to do when possible.  The basic method for that is to change directory 
to the root of the source code and then type "git apply /path/to/X.patch".


was (Author: elyograg):
bq. not sure how to install it now, if you can give me a hint

You can't install a patch on a binary download of Solr.  You have to obtain the 
source code for the specific version you want to patch, apply the patch, and 
then build Solr.

This wiki page describes how to get the source code and figure out which branch 
you want.  It also has some information about working with patches, although it 
doesn't have any info on applying them with git, which I prefer to do when 
possible.  The basic method for that is to change directory to the root of the 
source code and then type "git apply /path/to/X.patch".

> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch, SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-29 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16667204#comment-16667204
 ] 

Shawn Heisey commented on SOLR-12848:
-

bq. not sure how to install it now, if you can give me a hint

You can't install a patch on a binary download of Solr.  You have to obtain the 
source code for the specific version you want to patch, apply the patch, and 
then build Solr.

This wiki page describes how to get the source code and figure out which branch 
you want.  It also has some information about working with patches, although it 
doesn't have any info on applying them with git, which I prefer to do when 
possible.  The basic method for that is to change directory to the root of the 
source code and then type "git apply /path/to/X.patch".
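
A minimal sketch of that workflow (the branch name is just an example -- pick the 
one matching the version you want to patch):

{noformat}
git clone https://github.com/apache/lucene-solr.git
cd lucene-solr
git checkout branch_7_5
git apply /path/to/SOLR-12848.patch
{noformat}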

> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch, SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12882) Eliminate excessive lambda allocation in FacetFieldProcessorByHashDV.collectValFirstPhase

2018-10-28 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1424#comment-1424
 ] 

Shawn Heisey commented on SOLR-12882:
-

From 2018-10-24 on the #solr-dev IRC channel:

{noformat}
12:22 < tpunder> I have a few Solr Issues I'd like to get reviewed/merged 
(SOLR-12875, SOLR-12878, SOLR-12882, SOLR-12880).  What's the best way to go 
about doing that?
{noformat}

These issues look very compelling, especially SOLR-12878.  We've been fighting 
facet performance regression for a while now.  If I had even a sliver of 
understanding of the code you're working on, I would help you.  You might want 
to ping the dev list to raise visibility.

> Eliminate excessive lambda allocation in 
> FacetFieldProcessorByHashDV.collectValFirstPhase
> -
>
> Key: SOLR-12882
> URL: https://issues.apache.org/jira/browse/SOLR-12882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV.collectValFirstPhase method looks like this:
> {noformat}
> private void collectValFirstPhase(int segDoc, long val) throws IOException {
>  int slot = table.add(val); // this can trigger a rehash
>  // Our countAcc is virtual, so this is not needed:
>  // countAcc.incrementCount(slot, 1);
> super.collectFirstPhase(segDoc, slot, slotNum ->
> { Comparable value = calc.bitsToValue(val); return new 
> SlotContext(sf.getType().getFieldQuery(null, sf, calc.formatValue(value))); }
> );
> }
> {noformat}
>  
> For each value that is being iterated over there is a lambda allocation that 
> is passed as the slotContext argument to the super.collectFirstPhase method. 
> The lambda can be factored out such that there is only a single instance per 
> FacetFieldProcessorByHashDV instance. 
> The only tradeoff is that the value needs to be looked up from the table 
> in the lambda.  However, looking the value up in the table is going to be less 
> expensive than a memory allocation, and the slotContext lambda is only 
> used in RelatednessAgg and not for any of the field faceting or metrics.
>  
> {noformat}
> private void collectValFirstPhase(int segDoc, long val) throws IOException {
>   int slot = table.add(val); // this can trigger a rehash
>   // Our countAcc is virtual, so this is not needed:
>   // countAcc.incrementCount(slot, 1);
>   super.collectFirstPhase(segDoc, slot, slotContext);
> }
> /**
>  * SlotContext to use during all {@link SlotAcc} collection.
>  *
>  * This avoids a memory allocation for each invocation of 
> collectValFirstPhase.
>  */
> private IntFunction slotContext = (slotNum) -> {
>   long val = table.vals[slotNum];
>   Comparable value = calc.bitsToValue(val);
>   return new SlotContext(sf.getType().getFieldQuery(null, sf, 
> calc.formatValue(value)));
> };
> {noformat}
>  
> FacetFieldProcessorByArray already follows this same pattern



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.

2018-10-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1199#comment-1199
 ] 

Shawn Heisey commented on SOLR-12930:
-

bq. The idea of a single file was about taking all those smaller files we write 
and publishing them as a single HTML page that is the thing make public

If the overall guide's size and organizational strategy will generate a useful 
document as a single published page, whether it's multiple source files or a 
single file, then I'm all for it.  I suspect that as the dev guide evolves and 
grows, eventually it will work better as a handful of pages ... but I could be 
completely wrong about that.  Your point about the need for navigation when 
there are multiple pages is spot on.

bq. The basic infrastructure I created in my small example project 

Slightly ashamed to admit that I haven't actually looked at it.  A bit 
overwhelmed with real-world happenings, so I am skimming things that I should 
probably be studying in-depth.

bq. There's a little bit of complexity there in terms of changing our usual 
workflows, but maybe that's worth it

Which if I'm not mistaken is the gist of the effort that 
[~markrmil...@gmail.com] is trying to spearhead.  I think everyone is more or 
less on the same page.  I am encouraged when discussions reveal a rough 
consensus for a reasonably clear path forward.

bq. The Gitbox transition is interesting in this context.

Digesting that and trying to think ahead to possible gotchas and benefits 
(challenging when my knowledge is incomplete), I had a slight worry that the 
"issues" section of github would become a distraction.  I found that the 
apache/lucene-solr mirror currently on github doesn't have that feature turned 
on, so the link isn't even there.  I have in mind a small change to README.md 
for clarification purposes on issue tracking -- noting that the omission of 
"issues" in github is intentional because we use Apache Jira for that.  If I 
can find a moment to whip up a patch, I'll attach it here.  One thing I wonder 
is whether we can have github show the "issues" link, but instead of having the 
feature actually activated, have the link go to a page (markdown?) about Jira.


> Add great developer documentation for writing tests.
> 
>
> Key: SOLR-12930
> URL: https://issues.apache.org/jira/browse/SOLR-12930
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
> Attachments: solr-dev-docs.zip
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.

2018-10-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16665977#comment-16665977
 ] 

Shawn Heisey commented on SOLR-12930:
-

bq.  As for the details of whether it's a section of the standard ref guide or 
a sibling directory or... 

I think the reference guide needs to be for end users and not include things 
being discussed here.  I've got no interest in hiding the information from 
users, but I think it would be out of place in the reference guide and be 
confusing to novices.

So my thought is a sibling dev guide.  It certainly wouldn't be as big or as 
complex as the ref guide, and although I don't think we necessarily need to 
have it all on one page, there probably shouldn't be very many total pages, at 
least in the early days.  Later, we might decide on some kind of organizational 
concept which changes that assumption.

Something else to think about:  Do we want to move the complex moving parts for 
building the guide so a dev guide could be built that includes things like 
ReleaseToDo and similar best practices for Lucene?  Would we want to have 
separate dev guides for both halves of the project?

bq. Actually, GitHub renders Asciidoc right in the source browse

That's really awesome.

TL;DR:  One of my past conversations with Infra yielded a tidbit which I did 
mention on dev@l.a.o at one point:  At some point in the relatively near 
future, all git repositories at Apache will be migrated from the git-wip system 
to gitbox.  I don't know the details, but apparently gitbox includes some deep 
integration with github.  I wonder if there might be something interesting to 
be done with github's asciidoc rendering.

> Add great developer documentation for writing tests.
> 
>
> Key: SOLR-12930
> URL: https://issues.apache.org/jira/browse/SOLR-12930
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
> Attachments: solr-dev-docs.zip
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16661475#comment-16661475
 ] 

Shawn Heisey commented on SOLR-12894:
-

bq. we shouldn't be in the business of recommending one over the other. All we 
should say is both OpenJDK and Oracle JDK are well tested and both work fine.

Sounds good.  Add something like "You'll want to be sure that the license for 
the Java version you choose will meet your needs" ... to be re-worded as 
necessary so it flows well.  So we draw attention to licensing without an 
explicit recommendation.


> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his Jenkins.
> For reference, it currently uses:
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide (  
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps 
> even have a compatibility matrix.
>  
> Also, we should note that Java 9 and 10 are short-term releases. Hence we should 
> replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documention for Java Vendors

2018-10-22 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659849#comment-16659849
 ] 

Shawn Heisey commented on SOLR-12894:
-

Since Oracle has changed the license with version 11, we should probably start 
recommending OpenJDK, while stating that Oracle Java will work.


> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his Jenkins.
> For reference, it currently uses:
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide (  
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps 
> even have a compatibility matrix.
>  
> Also, we should note that Java 9 and 10 are short-term releases. Hence we should 
> replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12889) Clean up CoreAdmin behavior and responses when acting on cores that failed to initialize

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658017#comment-16658017
 ] 

Shawn Heisey commented on SOLR-12889:
-

I'm really liking the idea of adding a LIST action.  That would allow us to 
have the same commit for all branches.  Optionally, we could deprecate STATUS 
and remove it in the next major version, with LIST as the replacement.
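
Roughly (the LIST action is only a proposal at this point, it does not exist yet):

{noformat}
# Existing CoreAdmin call, including the initFailures section:
/solr/admin/cores?action=STATUS

# Proposed replacement:
/solr/admin/cores?action=LIST
{noformat}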

> Clean up CoreAdmin behavior and responses when acting on cores that failed to 
> initialize
> 
>
> Key: SOLR-12889
> URL: https://issues.apache.org/jira/browse/SOLR-12889
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr isn't behaving quite correctly when performing CoreAdmin actions on 
> cores that exist, but failed to initialize.
>  * RELOAD works. That was made possible by SOLR-10021.
>  * UNLOAD works, and can even delete directories if asked to.
>  * RENAME works, but Solr must be restarted for the admin UI to reflect the 
> new name in the "SolrCore Initialization Failures" message.
>  * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
> worked.
> I didn't try the other actions, because it doesn't really make any sense to 
> allow those on a core that failed.
> What I see as things that need to be checked or implemented when acting on 
> failed cores:
>  * SWAP
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * RENAME
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * UNLOAD
>  ** This looks like it behaves correctly.  Tried it with 
> deleteInstanceDir=true and it did wipe out the whole core.
>  * Other actions not already mentioned
>  ** Fail fast
> Something else to consider:  Get rid of the initFailures part of the STATUS 
> response.  List all cores, even those that failed.  Include a boolean item in 
> the response to indicate whether initialization succeeded, and only list some 
> of the full information for a failed core.  This would make implementing 
> SOLR-12863 easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12889) Clean up CoreAdmin behavior and responses when acting on cores that failed to initialize

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658014#comment-16658014
 ] 

Shawn Heisey edited comment on SOLR-12889 at 10/20/18 10:31 PM:


The last part of the proposal, removing initFailures and just including failed 
cores in the regular core list, probably needs to be a master-only change.  An 
alternate idea for that: implement a LIST action that does what was proposed, 
and leave STATUS as it is.


was (Author: elyograg):
The last part of the proposal, removing initFailures and just including failed 
cores in the regular core list, probably needs to be a master-only change.

> Clean up CoreAdmin behavior and responses when acting on cores that failed to 
> initialize
> 
>
> Key: SOLR-12889
> URL: https://issues.apache.org/jira/browse/SOLR-12889
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr isn't behaving quite correctly when performing CoreAdmin actions on 
> cores that exist, but failed to initialize.
>  * RELOAD works. That was made possible by SOLR-10021.
>  * UNLOAD works, and can even delete directories if asked to.
>  * RENAME works, but Solr must be restarted for the admin UI to reflect the 
> new name in the "SolrCore Initialization Failures" message.
>  * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
> worked.
> I didn't try the other actions, because it doesn't really make any sense to 
> allow those on a core that failed.
> What I see as things that need to be checked or implemented when acting on 
> failed cores:
>  * SWAP
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * RENAME
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * UNLOAD
>  ** This looks like it behaves correctly.  Tried it with 
> deleteInstanceDir=true and it did wipe out the whole core.
>  * Other actions not already mentioned
>  ** Fail fast
> Something else to consider:  Get rid of the initFailures part of the STATUS 
> response.  List all cores, even those that failed.  Include a boolean item in 
> the response to indicate whether initialization succeeded, and only list some 
> of the full information for a failed core.  This would make implementing 
> SOLR-12863 easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12889) Clean up CoreAdmin behavior and responses when acting on cores that failed to initialize

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658014#comment-16658014
 ] 

Shawn Heisey commented on SOLR-12889:
-

The last part of the proposal, removing initFailures and just including failed 
cores in the regular core list, probably needs to be a master-only change.

> Clean up CoreAdmin behavior and responses when acting on cores that failed to 
> initialize
> 
>
> Key: SOLR-12889
> URL: https://issues.apache.org/jira/browse/SOLR-12889
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr isn't behaving quite correctly when performing CoreAdmin actions on 
> cores that exist, but failed to initialize.
>  * RELOAD works. That was made possible by SOLR-10021.
>  * UNLOAD works, and can even delete directories if asked to.
>  * RENAME works, but Solr must be restarted for the admin UI to reflect the 
> new name in the "SolrCore Initialization Failures" message.
>  * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
> worked.
> I didn't try the other actions, because it doesn't really make any sense to 
> allow those on a core that failed.
> What I see as things that need to be checked or implemented when acting on 
> failed cores:
>  * SWAP
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * RENAME
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * UNLOAD
>  ** This looks like it behaves correctly.  Tried it with 
> deleteInstanceDir=true and it did wipe out the whole core.
>  * Other actions not already mentioned
>  ** Fail fast
> Something else to consider:  Get rid of the initFailures part of the STATUS 
> response.  List all cores, even those that failed.  Include a boolean item in 
> the response to indicate whether initialization succeeded, and only list some 
> of the full information for a failed core.  This would make implementing 
> SOLR-12863 easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12889) Clean up CoreAdmin behavior and responses when acting on cores that failed to initialize

2018-10-20 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12889:

Description: 
Solr isn't behaving quite correctly when performing CoreAdmin actions on cores 
that exist, but failed to initialize.
 * RELOAD works. That was made possible by SOLR-10021.
 * UNLOAD works, and can even delete directories if asked to.
 * RENAME works, but Solr must be restarted for the admin UI to reflect the new 
name in the "SolrCore Initialization Failures" message.
 * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
worked.

I didn't try the other actions, because it doesn't really make any sense to 
allow those on a core that failed.

What I see as things that need to be checked or implemented when acting on 
failed cores:
 * SWAP
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * RENAME
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * UNLOAD
 ** This looks like it behaves correctly.  Tried it with deleteInstanceDir=true 
and it did wipe out the whole core.
 * Other actions not already mentioned
 ** Fail fast

Something else to consider:  Get rid of the initFailures part of the STATUS 
response.  List all cores, even those that failed.  Include a boolean item in 
the response to indicate whether initialization succeeded, and only list some 
of the full information for a failed core.  This would make implementing 
SOLR-12863 easier.


  was:
Solr isn't behaving quite correctly when performing CoreAdmin actions on cores 
that exist, but failed to initialize.
 * RELOAD works. That was made possible by SOLR-10021.
 * UNLOAD works, and can even delete directories if asked to.
 * RENAME works, but Solr must be restarted for the admin UI to reflect the new 
name in the "SolrCore Initialization Failures" message.
 * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
worked.

I didn't try the other actions, because it doesn't really make any sense to 
allow those on a core that failed.

What I see as things that need to be checked or implemented when acting on 
failed cores:
 * SWAP
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * RENAME
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * Other actions not already mentioned
 ** Fail fast

Something else to consider:  Get rid of the initFailures part of the STATUS 
response.  List all cores, even those that failed.  Include a boolean item in 
the response to indicate whether initialization succeeded, and only list some 
of the full information for a failed core.  This would make implementing 
SOLR-12863 easier.



> Clean up CoreAdmin behavior and responses when acting on cores that failed to 
> initialize
> 
>
> Key: SOLR-12889
> URL: https://issues.apache.org/jira/browse/SOLR-12889
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Minor
>
> Solr isn't behaving quite correctly when performing CoreAdmin actions on 
> cores that exist, but failed to initialize.
>  * RELOAD works. That was made possible by SOLR-10021.
>  * UNLOAD works, and can even delete directories if asked to.
>  * RENAME works, but Solr must be restarted for the admin UI to reflect the 
> new name in the "SolrCore Initialization Failures" message.
>  * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
> worked.
> I didn't try the other actions, because it doesn't really make any sense to 
> allow those on a core that failed.
> What I see as things that need to be checked or implemented when acting on 
> failed cores:
>  * SWAP
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * RENAME
>  ** Fail fast.
>  ** OR make it work properly. If we choose this, adjust the core name in the 
> initFailures part of the STATUS response.
>  * UNLOAD
>  ** This looks like it behaves correctly.  Tried it with 
> deleteInstanceDir=true and it did wipe out the whole core.
>  * Other actions not already mentioned
>  ** Fail fast
> Something else to consider:  Get rid of the initFailures part of the STATUS 
> response.  List all cores, even those that failed.  Include a boolean item in 
> the response to indicate whether initialization succeeded, and only list some 
> of the full information for a failed core.  This would make implementing 
> SOLR-12863 easier.

[jira] [Commented] (SOLR-12863) Provide a way in the admin UI to reload cores that failed to start

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658012#comment-16658012
 ] 

Shawn Heisey commented on SOLR-12863:
-

I've been trying to decipher the javascript and html for the admin UI and it is 
proving to be difficult.  My javascript knowledge is only a little bit past 
beginner.

The CoreAdmin API action STATUS does return the names of cores that fail to 
initialize, in an "initFailures" section.  I've been doing some tests to see 
what actions can be taken on failed cores, and filed SOLR-12889 for cleaning 
that up.
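
For reference, a sketch of the relevant shape of a STATUS response (the core names 
and the error message are invented):

{noformat}
{
  "initFailures": {
    "badcore": "org.apache.solr.common.SolrException: Could not load conf ..."
  },
  "status": {
    "goodcore": { "name": "goodcore", ... }
  }
}
{noformat}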


> Provide a way in the admin UI to reload cores that failed to start
> --
>
> Key: SOLR-12863
> URL: https://issues.apache.org/jira/browse/SOLR-12863
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Major
>  Labels: newdev
>
> Imagine a situation where a core fails to load because there's some kind of 
> minor problem with the configuration for the core/collection.
> Once you've fixed whatever caused the core to fail, there is no way provided 
> in the admin UI to reload the core so it will start working.  The CoreAdmin 
> API can be accessed directly to initiate the RELOAD action, if the user is 
> able to figure out how to do that.  Restarting the entire node would take 
> care of it, potentially with major disruption.
> It would be really good to have cores that fail to initialize show up in the 
> CoreAdmin section of the admin UI, in a different color and with some kind of 
> visual indicator for the color blind, with a limited set of options that 
> includes RELOAD.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12889) Clean up CoreAdmin behavior and responses when acting on cores that failed to initialize

2018-10-20 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-12889:
---

 Summary: Clean up CoreAdmin behavior and responses when acting on 
cores that failed to initialize
 Key: SOLR-12889
 URL: https://issues.apache.org/jira/browse/SOLR-12889
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.5
Reporter: Shawn Heisey


Solr isn't behaving quite correctly when performing CoreAdmin actions on cores 
that exist, but failed to initialize.
 * RELOAD works. That was made possible by SOLR-10021.
 * UNLOAD works, and can even delete directories if asked to.
 * RENAME works, but Solr must be restarted for the admin UI to reflect the new 
name in the "SolrCore Initialization Failures" message.
 * SWAP doesn't actually work, but returns a response that *LOOKS* like it 
worked.

I didn't try the other actions, because it doesn't really make any sense to 
allow those on a core that failed.

What I see as things that need to be checked or implemented when acting on 
failed cores:
 * SWAP
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * RENAME
 ** Fail fast.
 ** OR make it work properly. If we choose this, adjust the core name in the 
initFailures part of the STATUS response.
 * Other actions not already mentioned
 ** Fail fast

Something else to consider:  Get rid of the initFailures part of the STATUS 
response.  List all cores, even those that failed.  Include a boolean item in 
the response to indicate whether initialization succeeded, and only list some 
of the full information for a failed core.  This would make implementing 
SOLR-12863 easier.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12863) Provide a way in the admin UI to reload cores that failed to start

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658000#comment-16658000
 ] 

Shawn Heisey commented on SOLR-12863:
-

Linking to SOLR-10021.  Looks like I was lucky when I was helping the end users 
that ran into this issue.  They might have needed to restart Solr entirely 
after fixing the config problem!

> Provide a way in the admin UI to reload cores that failed to start
> --
>
> Key: SOLR-12863
> URL: https://issues.apache.org/jira/browse/SOLR-12863
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Major
>  Labels: newdev
>
> Imagine a situation where a core fails to load because there's some kind of 
> minor problem with the configuration for the core/collection.
> Once you've fixed whatever caused the core to fail, there is no way provided 
> in the admin UI to reload the core so it will start working.  The CoreAdmin 
> API can be accessed directly to initiate the RELOAD action, if the user is 
> able to figure out how to do that.  Restarting the entire node would take 
> care of it, potentially with major disruption.
> It would be really good to have cores that fail to initialize show up in the 
> CoreAdmin section of the admin UI, in a different color and with some kind of 
> visual indicator for the color blind, with a limited set of options that 
> includes RELOAD.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-20 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657934#comment-16657934
 ] 

Shawn Heisey commented on SOLR-7642:


[~janhoy], that seems like a good compromise.

In truth I would prefer to always create the chroot without restriction, so 
SolrCloud is easier to use.

My agreement with the notion of only auto-creating "/solr" was just to address 
fears about the creation of surplus and/or incorrect znodes.

Sometimes we know enough about a situation that we can easily dismiss fears and 
objections to our plan ... hopefully with a rational reason.  Creation of extra 
znodes for incorrect chroots could certainly happen in the wild, and I can't 
say with 100 percent certainty that it would never cause anybody any actual 
problems.  For that reason, I don't think we can ignore it.

If I'm honest, I think the system property to restrict the chroot name is a 
placebo.  But sometimes users need the assurance that a placebo provides, so I 
support the idea.  I wouldn't expect much implementation difficulty.
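
For reference, the current manual workaround, sketched for the common "/solr" chroot:

{noformat}
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd makepath /solr
{noformat}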


> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657467#comment-16657467
 ] 

Shawn Heisey commented on SOLR-7642:


bq.  Instead of including the chroot in ZK_HOST, it could be set separately, 
so it would have to be a conscious decision.

The format of ZK_HOST is dictated by the ZooKeeper project.  We did not come up 
with that format.  We simply pass the string into the ZK code.  It is not ours 
to modify.
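
For clarity, the chroot is simply the optional path suffix on that connection 
string, e.g.:

{noformat}
ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr"
{noformat}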

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16656723#comment-16656723
 ] 

Shawn Heisey commented on SOLR-12243:
-

The central problem in this issue was unclear to me, so I asked [~ehaubert] if 
she could explain it.  With that information, I was able to do a test that 
makes it pretty clear.

With a 7.5.0 example setup, I created a "title" field using the default 
text_general fieldType (which uses SynonymGraphFilter at query time), and 
included the two configs provided in the issue description (synonyms and 
handler).  Here's the parsed queries for a couple of examples.  The difference 
here is one includes "dog" which has a multiterm synonym, and the other 
includes "rat" which only has single-term synonyms:

with q=allergic reaction dog
{noformat}
+Synonym(title:allergic title:hypersensitive))^100.0)~0.4
((title:reaction)^100.0)~0.4 ((title:canine (+title:canis +title:familiris)
(+title:k +title:9) title:dog)^100.0)~0.4)~3) () (title:\"(hypersensitive 
allergic) reaction\"~11)~0.4 ()
{noformat}

with q=allergic reaction rat
{noformat}
+Synonym(title:allergic title:hypersensitive))^100.0)~0.4
((title:reaction)^100.0)~0.4 ((Synonym(title:rat title:rattus))^100.0)~0.4)~3)
((title:\"(hypersensitive allergic) reaction (rattus rat)\"~20)^5000.0)~0.4
((title:\"(hypersensitive allergic) reaction\"~11)~0.4 (title:\"reaction 
(rattus rat)\"~11)~0.4)
((title:\"(hypersensitive allergic) reaction (rattus rat)\"~22)^1000.0)~0.4
{noformat}
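
For anyone wanting to reproduce this comparison, here is a hedged SolrJ sketch that prints the parsed query from the debug output; the base URL and collection name are assumptions, and the request handler defaults are expected to supply the edismax parameters:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ParsedQueryDebug {
  public static void main(String[] args) throws Exception {
    // Assumed base URL pointing at the collection under test
    try (SolrClient client = new HttpSolrClient.Builder(
            "http://localhost:8983/solr/mycollection").build()) {
      SolrQuery q = new SolrQuery("allergic reaction dog");
      q.set("debugQuery", "true");  // ask Solr to return the parsed query
      QueryResponse rsp = client.query(q);
      // "parsedquery" holds the structure shown in the blocks above
      System.out.println(rsp.getDebugMap().get("parsedquery"));
    }
  }
}
{code}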


> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-17 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12243:

Description: 
synonyms.txt:
{code}
allergic, hypersensitive
aspirin, acetylsalicylic acid
dog, canine, canis familiris, k 9
rat, rattus
{code}

request handler:

{code:xml}

 

 edismax
  0.4
 title^100
 title~20^5000
 title~11
 title~22^1000
 text
 
 3<-1 6<-3 9<30%
 *:*
 25


{code}

Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the above 
list will not be generated.

"allergic reaction dog" will generate pf2: "allergic reaction", but not 
pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction dog"

"aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
dose" or pf3:"aspirin dose ?"

 

  was:
synonyms.txt:

allergic, hypersensitive

aspirin, acetylsalicylic acid

dog, canine, canis familiris, k 9

rat, rattus

request handler:


 

 edismax
  0.4
 title^100
 title~20^5000
 title~11
 title~22^1000
 text
 
 3<-1 6<-3 9<30%
 *:*
 25

 

Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the above 
list will not be generated.

"allergic reaction dog" will generate pf2: "allergic reaction", but not 
pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction dog"

"aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
dose" or pf3:"aspirin dose ?"

 


> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12874) Java 9+ GC Log files are being rotated every 20KB instead of every 20MB

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650900#comment-16650900
 ] 

Shawn Heisey commented on SOLR-12874:
-

Is 20M properly interpreted by Java 8?  If not, we'll need different options 
for different Java versions.

Sounds like there needs to be a bug filed against Java, because all of Oracle's 
documentation that I've been able to locate indicates that the number should be 
interpreted as kilobytes if a unit is not provided.
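
If the option line in bin/solr has roughly this shape (the exact surrounding decorations are an assumption on my part), then an explicit unit sidesteps the bytes-versus-kilobytes question entirely:

{noformat}
-Xlog:gc*:file="$SOLR_LOGS_DIR/solr_gc.log":time,uptime:filecount=9,filesize=20M
{noformat}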

> Java 9+ GC Log files are being rotated every 20KB instead of every 20MB
> ---
>
> Key: SOLR-12874
> URL: https://issues.apache.org/jira/browse/SOLR-12874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Java 9+ GC logging options in bin/solr and bin/solr.cmd specify a log 
> rotation file size of 20000, which according to JEP 158 
> ([https://openjdk.java.net/jeps/158]) should be the "file size in kb". However, 
> when running Solr on Java 11 I'm seeing GC logs rotated every 20KB.
> Changing "filesize=20000" to "filesize=20M" fixes the problem for me under 
> Linux.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12869) Unit test stalling

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650581#comment-16650581
 ] 

Shawn Heisey commented on SOLR-12869:
-

The big problem with Guava and Solr isn't Solr itself -- it's Hadoop.  Solr 
includes hadoop dependencies so that it can store indexes in HDFS.  See 
SOLR-11763.

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When the guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
> finishes.
> For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: the guava upgrade from the default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires Solr code changes. The diff file (solr-release.diff) is attached 
> to this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 23.0

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650534#comment-16650534
 ] 

Shawn Heisey commented on SOLR-11763:
-

bq. Indeed and because of this we can't get rid of the dependency and move 
Solr's usage of Guava to Java8 

Even though we can't remove the dependency because it's required by other 
dependencies, I don't see any reason not to switch from Guava methods to native 
Java methods in code that we control.
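
For illustration, a hedged sketch of the kind of mechanical replacements that would involve; the snippets are generic examples, not lifted from Solr's source:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class GuavaToJava8 {
  public static void main(String[] args) {
    List<String> parts = Arrays.asList("a", "b", "c");

    // Guava: Joiner.on(",").join(parts)
    String joined = String.join(",", parts);

    // Guava: Preconditions.checkNotNull(joined, "joined")
    Objects.requireNonNull(joined, "joined");

    // Guava: Lists.transform(parts, String::toUpperCase)
    List<String> upper = parts.stream()
        .map(String::toUpperCase)
        .collect(Collectors.toList());

    System.out.println(joined + " " + upper);
  }
}
{code}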


> Upgrade Guava to 23.0
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650300#comment-16650300
 ] 

Shawn Heisey commented on SOLR-12867:
-

[~varunthacker], sounds like the correct thing to do is close this as a 
duplicate and add a note to SOLR-12291 saying that BACKUP is also affected.

> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.
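
The same reproduction can be driven from SolrJ; here is a hedged sketch, with the embedded ZooKeeper address from the "bin/solr -e cloud" example and the same collection, location, and async tag as in the steps above:

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.RequestStatusState;

public class AsyncBackupStatus {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // BACKUP with async=sometag (location as in the Windows URL above)
      CollectionAdminRequest.backupCollection("test2", "test2")
          .setLocation("C:\\Users\\elyograg\\Downloads\\solrbackups")
          .processAsync("sometag", client);

      Thread.sleep(5000);  // give the backup a few seconds to complete

      // REQUESTSTATUS for the async id; the overall state comes back even
      // though individual per-shard responses get overwritten by this bug
      RequestStatusState state = CollectionAdminRequest.requestStatus("sometag")
          .process(client).getRequestStatus();
      System.out.println("state: " + state);
    }
  }
}
{code}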



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12867:

Description: 
Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.

Not all of the responses from different nodes in the collection are being 
reported.  According to [~shalinmangar], this is because multiple responses 
from a node are overwriting earlier responses from that node.

Steps to reproduce:

 * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download.  
Tell it that you want 3 nodes, accept defaults for all other questions.
* Create a collection with 30 shards:
** bin\solr create -c test2 -shards 30 -replicationFactor 2
* Start an async backup of the collection.  On a Windows system, the URL might 
look like this:
** 
http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
 * After a few seconds (to give the backup time to complete), request the 
status of the async operation:
 ** 
http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag

The response will only contain individual statuses for 3 of the 30 shards.


  was:
Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.

Not all of the responses from different nodes in the collection are being 
reported.  According to [~shalinmangar], this is because multiple responses 
from a node are overwriting earlier responses from that node.



> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


