[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-01 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624684#comment-15624684
 ] 

Timo Hund commented on SOLR-9616:
---------------------------------

Joel Bernstein, yes, as you described: it happens with an empty index and seems 
to have been introduced in 6.0.

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2.1
>Reporter: Timo Hund
>Priority: Critical
> Fix For: 6.2.1
>
>
> When I run a query with expand=true and field collapsing and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?&fq={!collapse 
> field=pid}&expand=true&expand.rows=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead, I would expect to get an empty result. 
> Is this a bug?
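The trace above points at ExpandComponent.process() calling get(0) on an empty list. A minimal plain-Java model of the failure mode and of the guard the reporter expects (hypothetical names; this is not the actual Solr fix):

```java
import java.util.Collections;
import java.util.List;

public class ExpandGuard {
    // Hypothetical stand-in for taking the first collapsed group:
    // get(0) on an empty result list reproduces the trace's
    // java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    public static String firstGroupUnsafe(List<String> groups) {
        return groups.get(0);
    }

    // Guarded variant: an empty result simply yields no expanded
    // sections, which is the behavior the reporter expects.
    public static List<String> expandedGroups(List<String> groups) {
        if (groups.isEmpty()) {
            return Collections.emptyList();
        }
        return groups;
    }
}
```

The point is only that the empty-result case needs an explicit early return before any indexing into the collapsed groups.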



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9433) SolrCore clean-up logic uses incorrect path to delete dataDir on failure to create a core

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624850#comment-15624850
 ] 

ASF subversion and git services commented on SOLR-9433:
---

Commit da7ccd3eefc92943ac0cea5103c84530f77d67a4 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=da7ccd3 ]

SOLR-9433: SolrCore clean-up logic uses incorrect path to delete dataDir on 
failure to create a core

(cherry picked from commit 5120816)


> SolrCore clean-up logic uses incorrect path to delete dataDir on failure to 
> create a core
> -
>
> Key: SOLR-9433
> URL: https://issues.apache.org/jira/browse/SOLR-9433
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2
>Reporter: Evan Sayer
> Attachments: SOLR-9433.patch
>
>
> When a core fails to be created for some reason (errant schema or solrconfig 
> etc.), {{SolrCore.deleteUnloadedCore()}} is called from {{unload()}} in 
> CoreContainer in order to clean up the possibly left-over {{dataDir}} and 
> {{instanceDir}}.  The problem is that the CoreDescriptor passed to 
> {{SolrCore.deleteUnloadedCore()}} will have its value for {{dataDir}} set to 
> just "data/" unless an explicit {{dataDir}} was specified by the user in the 
> request to create the core, leading to an attempt to delete simply 
> {{"data/"}}, which presumably resolves to a non-existent directory under 
> Solr's home directory or some such.
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L974
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/SolrCore.java#L2537
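The fix described above amounts to resolving a relative {{dataDir}} against the core's instance directory before deleting anything. A minimal sketch of that resolution step (illustrative names; not the committed patch):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class DataDirResolve {
    // Hypothetical sketch of the fix described above: a relative dataDir
    // such as "data/" must be resolved against the core's instanceDir
    // before any delete is attempted; otherwise the delete targets a
    // "data/" path relative to the process working directory or Solr home.
    public static Path effectiveDataDir(Path instanceDir, String dataDir) {
        Path p = Paths.get(dataDir);
        return p.isAbsolute() ? p : instanceDir.resolve(p).normalize();
    }
}
```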






[jira] [Resolved] (SOLR-9433) SolrCore clean-up logic uses incorrect path to delete dataDir on failure to create a core

2016-11-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9433.
-----------------------------------------
   Resolution: Fixed
 Assignee: Shalin Shekhar Mangar
Fix Version/s: 6.4
   master (7.0)

Thanks Evan!

> SolrCore clean-up logic uses incorrect path to delete dataDir on failure to 
> create a core
> -
>
> Key: SOLR-9433
> URL: https://issues.apache.org/jira/browse/SOLR-9433
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2
>Reporter: Evan Sayer
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9433.patch
>
>
> When a core fails to be created for some reason (errant schema or solrconfig 
> etc.), {{SolrCore.deleteUnloadedCore()}} is called from {{unload()}} in 
> CoreContainer in order to clean up the possibly left-over {{dataDir}} and 
> {{instanceDir}}.  The problem is that the CoreDescriptor passed to 
> {{SolrCore.deleteUnloadedCore()}} will have its value for {{dataDir}} set to 
> just "data/" unless an explicit {{dataDir}} was specified by the user in the 
> request to create the core, leading to an attempt to delete simply 
> {{"data/"}}, which presumably resolves to a non-existent directory under 
> Solr's home directory or some such.
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L974
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/SolrCore.java#L2537






[jira] [Commented] (SOLR-9433) SolrCore clean-up logic uses incorrect path to delete dataDir on failure to create a core

2016-11-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624855#comment-15624855
 ] 

Shalin Shekhar Mangar commented on SOLR-9433:
---------------------------------------------

This is the corresponding commit on master: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/51208163

I wrote "OLR-9433" instead of SOLR-9433 so the commit bot did not post the 
message to this issue.

> SolrCore clean-up logic uses incorrect path to delete dataDir on failure to 
> create a core
> -
>
> Key: SOLR-9433
> URL: https://issues.apache.org/jira/browse/SOLR-9433
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2
>Reporter: Evan Sayer
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9433.patch
>
>
> When a core fails to be created for some reason (errant schema or solrconfig 
> etc.), {{SolrCore.deleteUnloadedCore()}} is called from {{unload()}} in 
> CoreContainer in order to clean up the possibly left-over {{dataDir}} and 
> {{instanceDir}}.  The problem is that the CoreDescriptor passed to 
> {{SolrCore.deleteUnloadedCore()}} will have its value for {{dataDir}} set to 
> just "data/" unless an explicit {{dataDir}} was specified by the user in the 
> request to create the core, leading to an attempt to delete simply 
> {{"data/"}}, which presumably resolves to a non-existent directory under 
> Solr's home directory or some such.
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L974
> https://github.com/apache/lucene-solr/blob/branch_5_5/solr/core/src/java/org/apache/solr/core/SolrCore.java#L2537






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3642 - Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3642/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth

Error Message:
Invalid json. Error 401: HTTP ERROR 401, problem accessing 
/solr/admin/authentication. Reason: Bad credentials. (Powered by Jetty 
9.3.8.v20160314)

Stack Trace:
java.lang.AssertionError: Invalid json

Error 401

HTTP ERROR: 401
Problem accessing /solr/admin/authentication. Reason:
Bad credentials

Powered by Jetty 9.3.8.v20160314

at 
__randomizedtesting.SeedInfo.seed([C9D5ED88702AE5D9:75BB9B9AD47966A3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:256)
at 
org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:237)
at 
org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth(BasicAuthStandaloneTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.Statem

[jira] [Created] (SOLR-9709) update http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section

2016-11-01 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9709:
--------------------------------------

 Summary: update http://wiki.apache.org/solr/SolJSON 'JSON specific 
parameters' section
 Key: SOLR-9709
 URL: https://issues.apache.org/jira/browse/SOLR-9709
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Priority: Minor


Currently http://wiki.apache.org/solr/SolJSON#JSON_specific_parameters documents
* json.nl=flat
* json.nl=map
* json.nl=arrarr

but the choices

* json.nl=arrmap
* json.nl=arrnvp

are not documented.

This ticket is to document {{json.nl=arrnvp}} added by SOLR-9442 and also 
{{json.nl=arrmap}} which already exists.

link to relevant code: 
[JSONResponseWriter.java#L85-L89|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L85-L89]
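For reference, the documented styles turn the same NamedList into different JSON shapes. A small plain-Java model of two of them, {{json.nl=flat}} and {{json.nl=arrmap}} (a simplified stand-in for Solr's writer, not its actual code):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JsonNlStyles {
    // Simplified model of a NamedList: ordered (name, value) pairs where
    // names may be null or repeated. Output shapes follow the wiki page
    // above; this is an illustration, not Solr's JSONResponseWriter.

    // json.nl=flat: NamedList("a"=1,"b"=2) => ["a",1,"b",2]
    public static String flat(List<Map.Entry<String, Object>> nl) {
        return "[" + nl.stream()
                .map(e -> quote(e.getKey()) + "," + e.getValue())
                .collect(Collectors.joining(",")) + "]";
    }

    // json.nl=arrmap: NamedList("a"=1,"b"=2) => [{"a":1},{"b":2}];
    // a null name yields the bare value.
    public static String arrmap(List<Map.Entry<String, Object>> nl) {
        return "[" + nl.stream()
                .map(e -> e.getKey() == null
                        ? String.valueOf(e.getValue())
                        : "{" + quote(e.getKey()) + ":" + e.getValue() + "}")
                .collect(Collectors.joining(",")) + "]";
    }

    private static String quote(String s) {
        return s == null ? "null" : "\"" + s + "\"";
    }
}
```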






[jira] [Commented] (SOLR-9442) Add json.nl=arrnvp (array of NamedValuePair) style in JSONResponseWriter

2016-11-01 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625012#comment-15625012
 ] 

Christine Poerschke commented on SOLR-9442:
---

Patch committed to master and cherry-picked to branch_6x. Created SOLR-9709 
sub-task for wiki update.

> Add json.nl=arrnvp (array of NamedValuePair) style in JSONResponseWriter
> 
>
> Key: SOLR-9442
> URL: https://issues.apache.org/jira/browse/SOLR-9442
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Jonny Marks
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9442.patch, SOLR-9442.patch, SOLR-9442.patch
>
>
> The JSONResponseWriter class currently supports several styles of NamedList 
> output format, documented on the wiki at http://wiki.apache.org/solr/SolJSON 
> and in the code at 
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L71-L76.
> For example the 'arrmap' style:
> {code}NamedList("a"=1,"b"=2,null=3) => [{"a":1},{"b":2},3]
> NamedList("a"=1,"bar"="foo",null=3.4f) => [{"a":1},{"bar":"foo"},3.4]{code}
> This patch creates a new style ‘arrnvp’ which is an array of named value 
> pairs. For example:
> {code}NamedList("a"=1,"b"=2,null=3) => 
> [{"name":"a","int":1},{"name":"b","int":2},{"int":3}]
> NamedList("a"=1,"bar"="foo",null=3.4f) => 
> [{"name":"a","int":1},{"name":"bar","str":"foo"},{"float":3.4}]{code}
> This style maintains the type information of the values, similar to the xml 
> format:
> {code:xml}
> <int name="a">1</int>
> <str name="bar">foo</str>
> <float>3.4</float>
> {code}
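The {{arrnvp}} shape in the examples above can be sketched in plain Java (a simplified stand-in for the writer; the type tagging below only distinguishes int/float/str):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ArrNvp {
    // json.nl=arrnvp: each entry becomes an object carrying the name
    // (when present) plus a type-tagged value. The type tagging is a
    // stand-in for the writer's logic, not Solr's actual code.
    public static String render(List<Map.Entry<String, Object>> nl) {
        return "[" + nl.stream().map(ArrNvp::entry)
                .collect(Collectors.joining(",")) + "]";
    }

    private static String entry(Map.Entry<String, Object> e) {
        Object v = e.getValue();
        String type = v instanceof Integer ? "int"
                : v instanceof Float ? "float" : "str";
        String value = v instanceof String
                ? "\"" + v + "\"" : String.valueOf(v);
        String name = e.getKey() == null
                ? "" : "\"name\":\"" + e.getKey() + "\",";
        return "{" + name + "\"" + type + "\":" + value + "}";
    }
}
```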






[jira] [Commented] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-11-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625167#comment-15625167
 ] 

Shalin Shekhar Mangar commented on SOLR-9609:
---------------------------------------------

This should go into security.json. It is a system-wide setting. Please don't 
force people to go edit solr.in.sh files to set this property across their 
clusters.

> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
>Assignee: Erick Erickson
> Attachments: SOLR-9609.patch, SOLR-9609.patch, solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
> <dataSource url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try to use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.
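A minimal, runnable illustration of the change under discussion: take the RSA keysize as a parameter (1024 by default) instead of hard-coding 512. The method name is illustrative; it is not Solr's CryptoKeys API:

```java
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;

public class RsaKeySize {
    // Sketch: parameterize the keysize rather than hard-coding 512.
    // The generated modulus bit length equals the requested keysize.
    public static RSAPublicKey generatePublic(int keySize) {
        try {
            KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
            keyGen.initialize(keySize);
            return (RSAPublicKey) keyGen.generateKeyPair().getPublic();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Where the setting should live (security.json vs. a system property) is the open question in the comments above; the generation itself is standard JCA.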






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625191#comment-15625191
 ] 

ASF subversion and git services commented on SOLR-9481:
---

Commit 22aa34e017bec1c8e8fd517e2969b1311c545c25 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=22aa34e ]

SOLR-9481: Move changes entry to 6.4


> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625194#comment-15625194
 ] 

Jan Høydahl commented on SOLR-9481:
---

Sure, I was just waiting for the RM to create version 6.4.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Updated] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9481:
--
Fix Version/s: (was: 6.x)
   6.4

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9661) Explain of select that uses replace() throws exception

2016-11-01 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625220#comment-15625220
 ] 

Dennis Gove commented on SOLR-9661:
---

I've thrown together a simple test that shows this exception.

{code}
@Test
public void replaceToExplanation() throws Exception {
  StreamFactory factory = new StreamFactory().withFunctionName("replace", 
ReplaceOperation.class);
  StreamOperation operation = new ReplaceOperation("fieldA", 
StreamExpressionParser.parse("replace(null, withValue=foo)"), factory);

  StreamExpressionParameter expression = operation.toExpression(factory);
  Explanation explanation = operation.toExplanation(factory);
}
{code}

Obviously, the toExplanation(...) line throws an exception, but so does the 
toExpression(...) line. I'm not sure why this hasn't come up before, as 
toExpression is used extensively, particularly by the parallel stream.
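The failure mode is easy to model outside Solr: the factory's reverse lookup from class to function name matches on the exact class, so a registered parent class does not cover an unregistered concrete subclass. A simplified plain-Java sketch (not StreamFactory's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FunctionNameLookup {
    // Simplified model of the factory's name registry; the shape of the
    // "Unable to find function name for class" error in the trace above.
    private final Map<String, Class<?>> byName = new LinkedHashMap<>();

    public FunctionNameLookup withFunctionName(String name, Class<?> clazz) {
        byName.put(name, clazz);
        return this;
    }

    public String getFunctionName(Class<?> clazz) {
        for (Map.Entry<String, Class<?>> e : byName.entrySet()) {
            // Exact class match: a registered parent class does not
            // satisfy a lookup for an unregistered subclass.
            if (e.getValue().equals(clazz)) {
                return e.getKey();
            }
        }
        throw new IllegalStateException(
            "Unable to find function name for class '" + clazz.getName() + "'");
    }
}
```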

> Explain of select that uses replace() throws exception
> --
>
> Key: SOLR-9661
> URL: https://issues.apache.org/jira/browse/SOLR-9661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> )
> {code}
> as a streaming expression produced this stack trace:
> {code}
> ERROR (qtp1989972246-17) [c:test s:shard1 r:core_node1 
> x:test_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: Unable 
> to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 509 - Still Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/509/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:61867","node_name":"127.0.0.1:61867_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:61879",   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:61879_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:61867",   "node_name":"127.0.0.1:61867_",  
 "state":"active",   "leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:61889",   "node_name":"127.0.0.1:61889_",  
 "state":"down",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:61867","node_name":"127.0.0.1:61867_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:61879",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:61879_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:61867",
  "node_name":"127.0.0.1:61867_",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:61889",
  "node_name":"127.0.0.1:61889_",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3F9694073ED7F6AA:B7C2ABDD902B9B52]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java

[jira] [Commented] (LUCENE-7531) Remove packing support from FST

2016-11-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625244#comment-15625244
 ] 

Dawid Weiss commented on LUCENE-7531:
-

Well, it's sad to see the stuff I came up with (and Mike implemented) go 
away... :) But more seriously -- this does seem to impact large automata. Can 
you recode the existing automata and see how much we lose by removing packing? 
Looking at the patch, target addresses are still vint-encoded; if I recall right, 
the compression ratio gained by packing was significant (compared to the baseline 
FST), but a small fraction of overall input size. So an FST gain of a few 
megabytes on data that is several hundred megabytes is indeed a fair price 
for cutting the additional complexity of FST construction.

+1 to remove it, but some stats on dictionary sizes before/after would be nice.
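The vint encoding mentioned above can be sketched as follows. This is a generic variable-length integer encoder, not Lucene's actual DataOutput implementation, but it uses the same 7-bits-per-byte idea and shows why small target addresses already compress reasonably well even without packing:

```java
import java.io.ByteArrayOutputStream;

public class VIntSketch {
    // Encode v using 7 payload bits per byte; the high bit marks continuation.
    static byte[] writeVInt(int v) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7F) != 0) {
            out.write((v & 0x7F) | 0x80); // low 7 bits, continuation bit set
            v >>>= 7;
        }
        out.write(v);                     // final byte, high bit clear
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Small arc targets cost 1 byte, the largest ints cost 5.
        System.out.println(writeVInt(5).length);       // 1
        System.out.println(writeVInt(300).length);     // 2
        System.out.println(writeVInt(1 << 28).length); // 5
    }
}
```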

> Remove packing support from FST
> ---
>
> Key: LUCENE-7531
> URL: https://issues.apache.org/jira/browse/LUCENE-7531
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7531.patch
>
>
> This seems to be only used for the kuromoji dictionaries, but we could easily 
> rebuild those dictionaries with packing disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9661) Explain of select that uses replace() throws exception

2016-11-01 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625245#comment-15625245
 ] 

Dennis Gove commented on SOLR-9661:
---

I can't think of a clever, generic way to handle this case. 
{{ReplaceWithFieldOperation}} and {{ReplaceWithValueOperation}} are both 
aliased by the {{ReplaceOperation}} class.

Within either of those classes there's no simple way to know which function 
name is assigned. There are a couple of ways to do it but they feel rather 
hokey. I'm going to give it some thought before deciding on a course of action.
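The difficulty Dennis describes can be sketched in miniature. This is a hypothetical registry, not Solr's actual StreamFactory, but it shows the shape of the bug: only the aliasing class is registered under a function name, so a reverse lookup from a concrete subclass (which is what toExpression() needs) finds nothing:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the aliasing problem (not Solr's StreamFactory).
public class AliasSketch {
    static class ReplaceOperation {}                                   // registered alias
    static class ReplaceWithValueOperation extends ReplaceOperation {} // never registered

    static final Map<String, Class<?>> REGISTRY = new LinkedHashMap<>();
    static {
        REGISTRY.put("replace", ReplaceOperation.class);
    }

    // Reverse lookup: class -> function name.
    static String getFunctionName(Class<?> clazz) {
        for (Map.Entry<String, Class<?>> e : REGISTRY.entrySet()) {
            if (e.getValue().equals(clazz)) {
                return e.getKey();
            }
        }
        return null; // the real code throws "Unable to find function name" here
    }

    public static void main(String[] args) {
        System.out.println(getFunctionName(ReplaceOperation.class));          // replace
        System.out.println(getFunctionName(ReplaceWithValueOperation.class)); // null
    }
}
```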

> Explain of select that uses replace() throws exception
> --
>
> Key: SOLR-9661
> URL: https://issues.apache.org/jira/browse/SOLR-9661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> )
> {code}
> as a streaming expression produced this stack trace:
> {code}
> ERROR (qtp1989972246-17) [c:test s:shard1 r:core_node1 
> x:test_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: Unable 
> to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.jav

[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625257#comment-15625257
 ] 

Alan Woodward commented on SOLR-9481:
-

I think the test failures may be due to auth credentials being set on the 
HttpClientConfigurer in previous tests persisting?  You can insert my usual 
rant about using global state here :)

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






Re: 6.3.0

2016-11-01 Thread Jan Høydahl
Solr:
  Added a separate Logging section, moved some security-related changes to the 
Security section, and shortened some long descriptions…

Several lines in the file still exceed 120 characters; could we rewrite them to 
make the list more pleasing to read?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 1. nov. 2016 kl. 06.23 skrev Shalin Shekhar Mangar :
> 
> I have created release notes for Lucene and Solr. Please edit to
> improve as you see fit.
> 
> Lucene: https://wiki.apache.org/lucene-java/ReleaseNote63
> Solr: https://wiki.apache.org/solr/ReleaseNote63
> 
> On Wed, Oct 26, 2016 at 2:16 PM, Shalin Shekhar Mangar
>  wrote:
>> It looks like 6.3.0 has accumulated enough new features, optimizations and
>> fixes. How do folks feel about pushing this release out?
>> 
>> I volunteer to be the RM. If there are no objections, I'd like to put the
>> first RC to vote on Monday.
>> 
>> --
>> Regards,
>> Shalin Shekhar Mangar.
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.
> 



[jira] [Commented] (SOLR-9706) fetchIndex blocks incoming queries when issued on a replica in SolrCloud

2016-11-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625330#comment-15625330
 ] 

Shalin Shekhar Mangar commented on SOLR-9706:
-

Interesting. Does this affect non-cloud master/slave setups as well?

> fetchIndex blocks incoming queries when issued on a replica in SolrCloud
> 
>
> Key: SOLR-9706
> URL: https://issues.apache.org/jira/browse/SOLR-9706
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>
> This is something of an edge case, but it's perfectly possible to issue a 
> fetchIndex command through the core admin API to a replica in SolrCloud. 
> While the fetch is going on, incoming queries are blocked. Then when the 
> fetch completes, all the queued-up queries execute.
> In the normal case, this is probably the proper behavior, since a fetchIndex 
> during "normal" SolrCloud operation indicates that the replica's index is too 
> far out of date and _shouldn't_ serve queries; this, however, is a special case.
> Why would one want to do this? Well, in _extremely_ high indexing throughput 
> situations, the additional time taken for the leader forwarding the query on 
> to a follower is too high. So there is an indexing cluster and a search 
> cluster and an external process that issues a fetchIndex to each replica in 
> the search cluster periodically.
> What do people think about an "expert" option for fetchIndex that would cause 
> a replica to behave like the old master/slave days and continue serving 
> queries while the fetchindex was going on? Or another solution?
> FWIW, here are the stack traces where the blocking is going on (around 6.3). 
> This is not hard to reproduce if you introduce an artificial delay in the 
> fetch command, then submit a fetchIndex and try to query.
> Blocked query thread(s)
> DefaultSolrCoreState.lock(159)
> DefaultSolrCoreState.getIndexWriter (104)
> SolrCore.openNewSearcher(1781)
> SolrCore.getSearcher(1931)
> SolrCore.getSearchers(1677)
> SolrCore.getSearcher(1577)
> SolrQueryRequestBase.getSearcher(115)
> QueryComponent.process(308).
> The stack trace that releases this is
> DefaultSolrCoreState.createMainIndexWriter(240)
> DefaultSolrCoreState.changeWriter(203)
> DefaultSolrCoreState.openIndexWriter(228) // LOCK RELEASED 2 lines later
> IndexFetcher.fetchLatestIndex(493) (approx, I have debugging code in there. 
> It's in the "finally" clause anyway.)
> IndexFetcher.fetchLatestIndex(251).
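The blocking pattern in the stack traces above can be reproduced in miniature. This is a hypothetical sketch, not Solr's DefaultSolrCoreState: a "fetch" thread holds the writer lock for the duration of the download, so a "query" thread that needs the same lock to open a new searcher is parked until the fetch's finally clause releases it:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the fetchIndex/query blocking (not Solr's code).
public class FetchBlockSketch {
    static final ReentrantReadWriteLock IW_LOCK = new ReentrantReadWriteLock();

    static long measureQueryWaitMillis(long fetchMillis) {
        Thread fetch = new Thread(() -> {
            IW_LOCK.writeLock().lock();                   // fetch grabs the writer lock
            try {
                TimeUnit.MILLISECONDS.sleep(fetchMillis); // stands in for the index download
            } catch (InterruptedException ignored) {
            } finally {
                IW_LOCK.writeLock().unlock();             // released in the finally clause
            }
        });
        try {
            fetch.start();
            TimeUnit.MILLISECONDS.sleep(50);              // let the fetch acquire the lock first
            long start = System.nanoTime();
            IW_LOCK.writeLock().lock();                   // "query" path needs the same lock
            IW_LOCK.writeLock().unlock();
            fetch.join();
            return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("query blocked for ~" + measureQueryWaitMillis(400) + " ms");
    }
}
```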






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625371#comment-15625371
 ] 

Jan Høydahl commented on SOLR-9481:
---

It has to be some global state or threading issue, I figure. Are you suggesting 
that some username/password is entered by another test and then we use the wrong 
credentials for this one?

The extra DEBUG logging revealed something interesting too. In a passing test 
run we see these lines:
{noformat}
   [junit4]   2> 3904 DEBUG (qtp1698836513-21) [] 
o.a.s.s.SolrDispatchFilter Request to authenticate: Request(GET 
//127.0.0.1:58765/solr/admin/authentication)@b521fd6, domain: 127.0.0.1, port: 
58765
   [junit4]   2> 3908 DEBUG (qtp1698836513-21) [] 
o.a.s.s.SolrDispatchFilter User principal: null
   [junit4]   2> 3908 DEBUG (qtp1698836513-21) [] o.a.s.s.HttpSolrCall 
AuthorizationContext : userPrincipal: [null] type: [ADMIN], collections: [], 
Path: [/admin/authentication] path : /admin/authentication params :
   [junit4]   2> 3911 DEBUG (qtp1698836513-21) [] 
o.a.s.s.RuleBasedAuthorizationPlugin No permissions configured for the resource 
/admin/authentication . So allowed to access
   [junit4]   2> 3912 INFO  (qtp1698836513-21) [] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/authentication params={} status=0 QTime=3
   [junit4]   2> 3912 DEBUG (qtp1698836513-21) [] o.a.s.s.HttpSolrCall 
Closing out SolrRequest: {{params(),defaults(wt=json&indent=true)}}
[...many more lines...]
   [junit4]   2> 4005 INFO  
(TEST-BasicAuthStandaloneTest.testBasicAuth-seed#[3F9AB8AA5B5A65E6]) [] 
o.e.j.s.ServerConnector Stopped 
ServerConnector@117a0348{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
{noformat}

But in the failing test we see no logs about checking userPrincipal at all:
{noformat}
  [junit4]   2> 2497828 DEBUG (qtp1492928552-46458) [] 
o.a.s.s.SolrDispatchFilter Request to authenticate: Request(GET 
https://127.0.0.1:64493/solr/admin/authentication)@7a1251ec, domain: 127.0.0.1, 
port: 64493
  [junit4]   2> 2497830 INFO  
(TEST-BasicAuthStandaloneTest.testBasicAuth-seed#[89F7DD5B01C6CD5E]) [] 
o.e.j.s.ServerConnector Stopped ServerConnector@66783857{SSL,[ssl, 
http/1.1]}{127.0.0.1:0}
{noformat}

I'm not sure why. The log {{o.a.s.s.SolrDispatchFilter User principal: null}} 
is always printed if {{cores.getAuthenticationPlugin() != null}}... Too few 
log statements in this part of the code...

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625403#comment-15625403
 ] 

Alan Woodward commented on SOLR-9481:
-

bq. Are you suggesting that some username/password is entered by another test 
and then we use wrong credentials for this?

Yes; if you look in BasicAuthPlugin.doAuthenticate(), you'll see that the 'Bad 
Credentials' error is only thrown if the passed in request has an 
'Authorization' header.  So a previous test must be adding in an auth header 
setting to the HttpClientConfigurer, and this is still there when we get to 
this test.
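The check Alan describes can be sketched as follows. This is a hypothetical simplification, not the real BasicAuthPlugin.doAuthenticate(): "Bad Credentials" is only reachable when the request carries an Authorization header at all, so a stale header left behind by a previous test turns a would-be challenge into a failure:

```java
// Hypothetical sketch of the header-dependent error path (not Solr's code).
public class AuthSketch {
    static String authenticate(String authorizationHeader) {
        if (authorizationHeader == null) {
            return "401 require authentication"; // no header: just challenge the client
        }
        // Only a present-but-wrong header produces "Bad Credentials".
        return isValid(authorizationHeader) ? "200 OK" : "401 Bad Credentials";
    }

    static boolean isValid(String header) {
        // Stand-in for the real credential check against security.json.
        return "Basic dGVzdDp0ZXN0".equals(header);
    }

    public static void main(String[] args) {
        System.out.println(authenticate(null));                      // challenge
        System.out.println(authenticate("Basic stale-from-test-A")); // Bad Credentials
    }
}
```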

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9709) update http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section

2016-11-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625411#comment-15625411
 ] 

Cassandra Targett commented on SOLR-9709:
-

The JSON response writer is lightly documented in the Solr Ref Guide 
(https://cwiki.apache.org/confluence/display/solr/Response+Writers#ResponseWriters-JSONResponseWriter).
 It's missing the parameters and some other information, which has been on the 
Ref Guide TODO list for a very long time.

Adding the new params to the Solr Wiki perpetuates the problem of documentation 
in 2 places. I suggest that a more complete resolution to this is to migrate 
the missing content from the Solr Wiki to the Ref Guide and turn the Solr Wiki 
page into a stub as outlined here: 
https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation#Internal-MaintainingDocumentation-Migrating"Official"DocumentationfromMoinMoin.
 That would help with the overall migration process, which has been stalled for 
a long time but is still a major issue with documentation.

> update http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section
> -
>
> Key: SOLR-9709
> URL: https://issues.apache.org/jira/browse/SOLR-9709
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Christine Poerschke
>Priority: Minor
>
> Currently http://wiki.apache.org/solr/SolJSON#JSON_specific_parameters 
> documents
> * json.nl=flat
> * json.nl=map
> * json.nl=arrarr
> but choices
> * json.nl=arrmap
> * json.nl=arrnvp
> are not documented.
> This ticket is to document {{json.nl=arrnvp}} added by SOLR-9442 and also 
> {{json.nl=arrmap}} which already exists.
> link to relevant code: 
> [JSONResponseWriter.java#L85-L89|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L85-L89]
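The json.nl choices listed above can be illustrated with a small sketch. This mimics the wiki's description of the documented shapes ("flat", "map", "arrarr") for a named list with a repeated name; it is not Solr's actual JSONResponseWriter code, and the undocumented "arrmap"/"arrnvp" shapes are left to the ticket:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the documented json.nl shapes for a NamedList [("a",1),("a",2)].
public class JsonNlSketch {
    static List<Object> flat(List<Object[]> nl) {           // json.nl=flat
        List<Object> out = new ArrayList<>();
        for (Object[] e : nl) { out.add(e[0]); out.add(e[1]); }
        return out;
    }

    static Map<Object, Object> map(List<Object[]> nl) {     // json.nl=map
        Map<Object, Object> out = new LinkedHashMap<>();
        for (Object[] e : nl) out.put(e[0], e[1]);          // repeated names collapse
        return out;
    }

    static List<List<Object>> arrarr(List<Object[]> nl) {   // json.nl=arrarr
        List<List<Object>> out = new ArrayList<>();
        for (Object[] e : nl) out.add(Arrays.asList(e[0], e[1]));
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> nl = Arrays.asList(new Object[]{"a", 1}, new Object[]{"a", 2});
        System.out.println(flat(nl));   // [a, 1, a, 2]
        System.out.println(map(nl));    // {a=2}  -- the repeated name is lost
        System.out.println(arrarr(nl)); // [[a, 1], [a, 2]]
    }
}
```

The map shape losing the repeated name is exactly why the array-based variants exist.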






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1144 - Still Unstable

2016-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1144/

8 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.test

Error Message:
Shard a1x2_shard1_replica2 received all 10 requests

Stack Trace:
java.lang.AssertionError: Shard a1x2_shard1_replica2 received all 10 requests
at 
__randomizedtesting.SeedInfo.seed([BBD2D1049B4F39BF:3386EEDE35B35447]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:122)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.test(TestRandomRequestDistribution.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeak

[jira] [Commented] (LUCENE-7531) Remove packing support from FST

2016-11-01 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625584#comment-15625584
 ] 

Adrien Grand commented on LUCENE-7531:
--

The thing that made me open that issue is not really that I think packing is 
not a good idea: actually I have no idea. But I'd like to clean up PackedInts a 
bit, and FST packing is one user of this API. And I noticed there is nothing in 
the code base that enables packing on FSTs, except the kuromoji dictionaries, 
which are _smaller_ with packing disabled than with packing enabled.

> Remove packing support from FST
> ---
>
> Key: LUCENE-7531
> URL: https://issues.apache.org/jira/browse/LUCENE-7531
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7531.patch
>
>
> This seems to be only used for the kuromoji dictionaries, but we could easily 
> rebuild those dictionaries with packing disabled.






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625588#comment-15625588
 ] 

ASF subversion and git services commented on SOLR-9481:
---

Commit 4383bec84c38464c60e63880ad0ba37128d261a3 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4383bec ]

SOLR-9481: Clearing existing global interceptors on HttpClientUtil to avoid 
user/pass leaks from other tests


> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625601#comment-15625601
 ] 

Jan Høydahl commented on SOLR-9481:
---

Ok, I'm attempting a blind fix by clearing interceptors on HttpClientUtil in 
{{@Before}} before creating the client I use for the test. So if it runs 
without new test failures for a few days I guess it worked... This global state 
stuff surely looks shaky.
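The hazard being fixed can be sketched in miniature. This is hypothetical, not the real HttpClientUtil: interceptors live in static state that outlives each test, so credentials registered by one test leak into the next unless the list is cleared up front, which is what the @Before fix does:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of global interceptor state leaking between tests.
public class GlobalStateSketch {
    static final List<String> REQUEST_INTERCEPTORS = new ArrayList<>(); // global, JVM-wide

    static void previousTest() {
        REQUEST_INTERCEPTORS.add("add Authorization header"); // never removed by that test
    }

    static void beforeNextTest() {
        REQUEST_INTERCEPTORS.clear(); // the "blind fix": reset globals in @Before
    }

    public static void main(String[] args) {
        previousTest();
        System.out.println("leaked interceptors: " + REQUEST_INTERCEPTORS.size()); // 1
        beforeNextTest();
        System.out.println("after clearing: " + REQUEST_INTERCEPTORS.size());      // 0
    }
}
```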

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 486 - Still Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/486/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor169.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:706)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:768)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1007)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:872)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:776)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor169.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:706)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:768)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1007)
at org.apache.solr.core.SolrCore.(SolrCore.java:872)
at org.apache.solr.core.SolrCore.(SolrCore.java:776)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([3FFC99BF5C01CF22]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:260)
at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.State

[jira] [Commented] (SOLR-9706) fetchIndex blocks incoming queries when issued on a replica in SolrCloud

2016-11-01 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625624#comment-15625624
 ] 

Erick Erickson commented on SOLR-9706:
--

M/S replication does not show this behavior. That's why I wondered if it's 
deliberate or just an accident of coding.

Given that it only happens in SolrCloud, and the node should be in recovery and 
thus not receive any queries, if it's accidental then it would go unnoticed. 
One could even argue that this is correct in the "normal" case.

This scenario is one in which an explicit fetchindex is submitted while the 
search cluster is actively serving queries, thus something of an edge case.

The idea of passing a parameter to override this behavior assumes that it's 
deliberate. If changing the code such that _explicit_ fetchindex commands don't 
block that would be fine too.
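The behavior being discussed (a long-running fetch holding a lock that query threads also need, so queries queue up and all execute once the fetch finishes) can be sketched outside Solr. The class, method names, and timings below are hypothetical stand-ins, not Solr's actual code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch (not Solr's actual classes) of the reported behavior:
// the fetch holds a writer lock for the whole download, so query threads
// that need the IndexWriter to open a searcher queue up behind it and only
// run once the fetch finishes.
public class FetchBlocksQueries {
    static final ReentrantReadWriteLock iwLock = new ReentrantReadWriteLock();

    // Stand-in for the fetchIndex path, which takes the lock up front.
    static void fetchIndex(long fetchMillis) throws InterruptedException {
        iwLock.writeLock().lock();
        try {
            Thread.sleep(fetchMillis); // downloading segments...
        } finally {
            iwLock.writeLock().unlock(); // released in a finally, as noted above
        }
    }

    // Stand-in for a query thread that needs the lock via getIndexWriter().
    static void query() {
        iwLock.readLock().lock();
        try {
            // serve the request
        } finally {
            iwLock.readLock().unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread fetch = new Thread(() -> {
            try { fetchIndex(200); } catch (InterruptedException ignored) {}
        });
        fetch.start();
        Thread.sleep(50); // let the fetch grab the lock first
        long t0 = System.currentTimeMillis();
        query(); // blocks until the fetch releases the lock
        System.out.println("query waited ~" + (System.currentTimeMillis() - t0) + " ms");
        fetch.join();
    }
}
```

An "expert" option as proposed would amount to the query path skipping the shared lock while an explicit fetch is in flight.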

> fetchIndex blocks incoming queries when issued on a replica in SolrCloud
> 
>
> Key: SOLR-9706
> URL: https://issues.apache.org/jira/browse/SOLR-9706
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>
> This is something of an edge case, but it's perfectly possible to issue a 
> fetchIndex command through the core admin API to a replica in SolrCloud. 
> While the fetch is going on, incoming queries are blocked. Then when the 
> fetch completes, all the queued-up queries execute.
> In the normal case, this is probably the proper behavior as a fetchIndex 
> during "normal" SolrCloud operation indicates that the replica's index is too 
> far out of date and _shouldn't_ serve queries; this, however, is a special case.
> Why would one want to do this? Well, in _extremely_ high indexing throughput 
> situations, the additional time taken for the leader forwarding the query on 
> to a follower is too high. So there is an indexing cluster and a search 
> cluster and an external process that issues a fetchIndex to each replica in 
> the search cluster periodically.
> What do people think about an "expert" option for fetchIndex that would cause 
> a replica to behave like the old master/slave days and continue serving 
> queries while the fetchindex was going on? Or another solution?
> FWIW, here's the stack traces where the blocking is going on (6.3 about). 
> This is not hard to reproduce if you introduce an artificial delay in the 
> fetch command then submit a fetchIndex and try to query.
> Blocked query thread(s)
> DefaultSolrCoreState.lock(159)
> DefaultSolrCoreState.getIndexWriter (104)
> SolrCore.openNewSearcher(1781)
> SolrCore.getSearcher(1931)
> SolrCore.getSearchers(1677)
> SolrCore.getSearcher(1577)
> SolrQueryRequestBase.getSearcher(115)
> QueryComponent.process(308).
> The stack trace that releases this is
> DefaultSolrCoreState.createMainIndexWriter(240)
> DefaultSolrCoreState.changeWriter(203)
> DefaultSolrCoreState.openIndexWriter(228) // LOCK RELEASED 2 lines later
> IndexFetcher.fetchLatestIndex(493) (approx, I have debugging code in there. 
> It's in the "finally" clause anyway.)
> IndexFetcher.fetchLatestIndex(251).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9706) fetchIndex blocks incoming queries when issued on a replica in SolrCloud

2016-11-01 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625624#comment-15625624
 ] 

Erick Erickson edited comment on SOLR-9706 at 11/1/16 2:57 PM:
---

M/S replication does not show this behavior. That's why I wondered if it's 
deliberate or just an accident of coding.

Given that it only happens in SolrCloud, and the node should be in recovery 
when the logic for a fetchindex kicks in and thus not receive any queries, if 
it's accidental then it could easily have been there from day 1. One could even 
argue that this is correct in the "normal" case.

This scenario is one in which an explicit fetchindex is submitted while the 
search cluster is actively serving queries, thus something of an edge case.

The idea of passing a parameter to override this behavior assumes that it's 
deliberate. If changing the code such that _explicit_ fetchindex commands in 
cloud mode don't block incoming queries that would be fine too.


was (Author: erickerickson):
M/S replication does not show this behavior. That's why I wondered if it's 
deliberate or just an accident of coding.

Given that it only happens in SolrCloud, and the node should be in recovery and 
thus not receive any queries, if it's accidental then it would go unnoticed. 
One could even argue that this is correct in the "normal" case.

This scenario is one in which an explicit fetchindex is submitted while the 
search cluster is actively serving queries, thus something of an edge case.

The idea of passing a parameter to override this behavior assumes that it's 
deliberate. If changing the code such that _explicit_ fetchindex commands don't 
block that would be fine too.

> fetchIndex blocks incoming queries when issued on a replica in SolrCloud
> 
>
> Key: SOLR-9706
> URL: https://issues.apache.org/jira/browse/SOLR-9706
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>
> This is something of an edge case, but it's perfectly possible to issue a 
> fetchIndex command through the core admin API to a replica in SolrCloud. 
> While the fetch is going on, incoming queries are blocked. Then when the 
> fetch completes, all the queued-up queries execute.
> In the normal case, this is probably the proper behavior as a fetchIndex 
> during "normal" SolrCloud operation indicates that the replica's index is too 
> far out of date and _shouldn't_ serve queries; this, however, is a special case.
> Why would one want to do this? Well, in _extremely_ high indexing throughput 
> situations, the additional time taken for the leader forwarding the query on 
> to a follower is too high. So there is an indexing cluster and a search 
> cluster and an external process that issues a fetchIndex to each replica in 
> the search cluster periodically.
> What do people think about an "expert" option for fetchIndex that would cause 
> a replica to behave like the old master/slave days and continue serving 
> queries while the fetchindex was going on? Or another solution?
> FWIW, here's the stack traces where the blocking is going on (6.3 about). 
> This is not hard to reproduce if you introduce an artificial delay in the 
> fetch command then submit a fetchIndex and try to query.
> Blocked query thread(s)
> DefaultSolrCoreState.lock(159)
> DefaultSolrCoreState.getIndexWriter (104)
> SolrCore.openNewSearcher(1781)
> SolrCore.getSearcher(1931)
> SolrCore.getSearchers(1677)
> SolrCore.getSearcher(1577)
> SolrQueryRequestBase.getSearcher(115)
> QueryComponent.process(308).
> The stack trace that releases this is
> DefaultSolrCoreState.createMainIndexWriter(240)
> DefaultSolrCoreState.changeWriter(203)
> DefaultSolrCoreState.openIndexWriter(228) // LOCK RELEASED 2 lines later
> IndexFetcher.fetchLatestIndex(493) (approx, I have debugging code in there. 
> It's in the "finally" clause anyway.)
> IndexFetcher.fetchLatestIndex(251).






Re: [VOTE] Release Lucene/Solr 6.3.0 RC2

2016-11-01 Thread Kevin Risden
+1

SUCCESS! [0:57:07.005632]

Kevin Risden

On Mon, Oct 31, 2016 at 2:26 PM, Steve Rowe  wrote:

> +1
>
> Smoke tester is happy: SUCCESS! [0:24:53.709200]
>
> Docs, changes and javadocs look good.
>
> --
> Steve
> www.lucidworks.com
>
> > On Oct 31, 2016, at 2:28 PM, Shalin Shekhar Mangar 
> wrote:
> >
> > Please vote for the second release candidate for Lucene/Solr 6.3.0
> >
> > The artifacts can be downloaded from:
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.3.0-RC2-
> rev1fe1a54db32b8c27bfae81887cd4d75242090613/
> >
> > You can run the smoke tester directly with this command:
> > python3 -u dev-tools/scripts/smokeTestRelease.py
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.3.0-RC2-
> rev1fe1a54db32b8c27bfae81887cd4d75242090613/
> >
> > Smoke tester passed for me:
> > SUCCESS! [0:35:05.847870]
> >
> > Here's my +1 to release.
> >
> > --
> > Regards,
> > Shalin Shekhar Mangar.
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625778#comment-15625778
 ] 

Shawn Heisey commented on SOLR-9481:


That's kinda scary.

Here's what came to mind before looking at the code: Thread-safe code becomes 
very difficult when there are static class variables that affect client 
building.

After looking at the code, it's a similar thought, but the situation is worse 
than I imagined: every HttpClient built by our util class in a program will 
share the same global list of interceptors (whatever those do), which means 
that if you set up the interceptors the way you want for one server, create the 
HttpClient and the SolrClient, then clear the interceptors and set up a second 
HttpClient/SolrClient, BOTH clients will use the new list.
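A minimal illustration of that failure mode (the class and field names here are invented for the example; this is not Solr's actual HttpClientUtil): because the builder hands out the one static list instead of copying it per client, reconfiguring for a second client retroactively changes the first.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical builder that consults one static interceptor list rather
// than snapshotting it for each client it builds.
class ClientUtil {
    // One list shared by every client this util ever builds.
    static final List<String> interceptors = new ArrayList<>();

    static List<String> buildClient() {
        return interceptors; // hands back the live list, not a copy
    }
}

public class SharedInterceptorDemo {
    public static void main(String[] args) {
        ClientUtil.interceptors.add("auth-for-server-A");
        List<String> clientA = ClientUtil.buildClient();

        // Reconfigure for a second server...
        ClientUtil.interceptors.clear();
        ClientUtil.interceptors.add("auth-for-server-B");
        List<String> clientB = ClientUtil.buildClient();

        // ...and the first "client" silently changes too.
        System.out.println(clientA); // [auth-for-server-B]
        System.out.println(clientB); // [auth-for-server-B]
    }
}
```

The usual fix is to copy the list at build time (or make it per-instance), so each client keeps the configuration it was built with.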

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9681) add filter to any facet

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625803#comment-15625803
 ] 

ASF subversion and git services commented on SOLR-9681:
---

Commit 359f981b0e2737c3d019d0097e5be3bf76874407 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=359f981 ]

SOLR-9681: move "filter" inside "domain" block


> add filter to any facet
> ---
>
> Key: SOLR-9681
> URL: https://issues.apache.org/jira/browse/SOLR-9681
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9681.patch, SOLR-9681.patch
>
>
> For the JSON Facet API, we should be able to add a list of filters to any 
> facet.  These would be applied after any domain changes, hence useful for 
> parent->child mapping that would otherwise match all children of any parent 
> (SOLR-9510)
> The API should also be consistent with "filter" at the top level of the JSON 
> Request API (examples at http://yonik.com/solr-json-request-api/ )






[jira] [Commented] (SOLR-9681) add filter to any facet

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625857#comment-15625857
 ] 

ASF subversion and git services commented on SOLR-9681:
---

Commit 8c42045f2781e44b22bf9ac8faca0b32346e5cc3 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c42045 ]

SOLR-9681: move "filter" inside "domain" block


> add filter to any facet
> ---
>
> Key: SOLR-9681
> URL: https://issues.apache.org/jira/browse/SOLR-9681
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9681.patch, SOLR-9681.patch
>
>
> For the JSON Facet API, we should be able to add a list of filters to any 
> facet.  These would be applied after any domain changes, hence useful for 
> parent->child mapping that would otherwise match all children of any parent 
> (SOLR-9510)
> The API should also be consistent with "filter" at the top level of the JSON 
> Request API (examples at http://yonik.com/solr-json-request-api/ )






[jira] [Resolved] (SOLR-9681) add filter to any facet

2016-11-01 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9681.

Resolution: Fixed

Change is in, thanks for the input!

> add filter to any facet
> ---
>
> Key: SOLR-9681
> URL: https://issues.apache.org/jira/browse/SOLR-9681
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9681.patch, SOLR-9681.patch
>
>
> For the JSON Facet API, we should be able to add a list of filters to any 
> facet.  These would be applied after any domain changes, hence useful for 
> parent->child mapping that would otherwise match all children of any parent 
> (SOLR-9510)
> The API should also be consistent with "filter" at the top level of the JSON 
> Request API (examples at http://yonik.com/solr-json-request-api/ )






[jira] [Commented] (SOLR-9709) update http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section

2016-11-01 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625884#comment-15625884
 ] 

Christine Poerschke commented on SOLR-9709:
---

Thanks Cassandra for the pointers into the cwiki pages. Somehow I was under the 
(mistaken) impression that the JSON response writer is not in the Solr Ref 
Guide at all. But since it is lightly present, then yes, it totally makes sense 
to extend that documentation instead of updating the Solr Wiki itself.

> update http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section
> -
>
> Key: SOLR-9709
> URL: https://issues.apache.org/jira/browse/SOLR-9709
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Christine Poerschke
>Priority: Minor
>
> Currently http://wiki.apache.org/solr/SolJSON#JSON_specific_parameters 
> documents
> * json.nl=flat
> * json.nl=map
> * json.nl=arrarr
> but choices
> * json.nl=arrmap
> * json.nl=arrnvp
> are not documented.
> This ticket is to document {{json.nl=arrnvp}} added by SOLR-9442 and also 
> {{json.nl=arrmap}} which already exists.
> link to relevant code: 
> [JSONResponseWriter.java#L85-L89|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L85-L89]
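For reference, the three already-documented styles lay out an ordered name/value list as follows. This is an illustration only, not Solr's writer; the arrmap and arrnvp shapes are defined in the JSONResponseWriter code linked above and are deliberately omitted here rather than guessed at:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

/** Sketch (not Solr code) of how the documented json.nl styles render an
 *  ordered name/value list such as (a=1, b=2). */
public class JsonNlStyles {
    static final Map<String, Integer> nl = new LinkedHashMap<>();
    static { nl.put("a", 1); nl.put("b", 2); }

    static String flat() {            // json.nl=flat
        StringJoiner j = new StringJoiner(",", "[", "]");
        nl.forEach((k, v) -> { j.add("\"" + k + "\""); j.add(String.valueOf(v)); });
        return j.toString();          // ["a",1,"b",2]
    }

    static String map() {             // json.nl=map
        StringJoiner j = new StringJoiner(",", "{", "}");
        nl.forEach((k, v) -> j.add("\"" + k + "\":" + v));
        return j.toString();          // {"a":1,"b":2}
    }

    static String arrarr() {          // json.nl=arrarr
        StringJoiner j = new StringJoiner(",", "[", "]");
        nl.forEach((k, v) -> j.add("[\"" + k + "\"," + v + "]"));
        return j.toString();          // [["a",1],["b",2]]
    }

    public static void main(String[] args) {
        System.out.println(flat());
        System.out.println(map());
        System.out.println(arrarr());
    }
}
```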






[GitHub] lucene-solr pull request #104: SOLR-8593 - WIP

2016-11-01 Thread risdenk
Github user risdenk commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r85965892
  
--- Diff: lucene/default-nested-ivy-settings.xml ---
@@ -32,6 +32,7 @@
   
 
   
+https://repository.apache.org/content/repositories/snapshots"; 
m2compatible="true" />
--- End diff --

Avatica 1.9 released and PR updated. Waiting on Calcite 1.11.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Updated] (SOLR-9709) move http://wiki.apache.org/solr/SolJSON ('JSON specific parameters' +) to Solr Ref Guide

2016-11-01 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9709:
--
Summary: move http://wiki.apache.org/solr/SolJSON ('JSON specific 
parameters' +) to Solr Ref Guide  (was: update 
http://wiki.apache.org/solr/SolJSON 'JSON specific parameters' section)

> move http://wiki.apache.org/solr/SolJSON ('JSON specific parameters' +) to 
> Solr Ref Guide
> -
>
> Key: SOLR-9709
> URL: https://issues.apache.org/jira/browse/SOLR-9709
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Christine Poerschke
>Priority: Minor
>
> Currently http://wiki.apache.org/solr/SolJSON#JSON_specific_parameters 
> documents
> * json.nl=flat
> * json.nl=map
> * json.nl=arrarr
> but choices
> * json.nl=arrmap
> * json.nl=arrnvp
> are not documented.
> This ticket is to document {{json.nl=arrnvp}} added by SOLR-9442 and also 
> {{json.nl=arrmap}} which already exists.
> link to relevant code: 
> [JSONResponseWriter.java#L85-L89|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L85-L89]






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625888#comment-15625888
 ] 

ASF GitHub Bot commented on SOLR-8593:
--

Github user risdenk commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r85965892
  
--- Diff: lucene/default-nested-ivy-settings.xml ---
@@ -32,6 +32,7 @@
   
 
   
+https://repository.apache.org/content/repositories/snapshots"; 
m2compatible="true" />
--- End diff --

Avatica 1.9 released and PR updated. Waiting on Calcite 1.11.


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18192 - Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18192/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/index.20161101121248588,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/index.20161101121248499,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/snapshot_metadata]
 expected:<3> but was:<4>

Stack Trace:
java.lang.AssertionError: 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/index.20161101121248588,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/index.20161101121248499,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_3A515235F64870D3-001/solr-instance-026/./collection1/data/snapshot_metadata]
 expected:<3> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([3A515235F64870D3:CD22BC6D30A0DF35]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:902)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(Abs

[jira] [Commented] (LUCENE-7531) Remove packing support from FST

2016-11-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625990#comment-15625990
 ] 

Dawid Weiss commented on LUCENE-7531:
-

The code may not call for packed dictionaries, but there may be existing 
dictionaries that are already packed. Anyway, I don't think this is that 
crucial, really. 

bq. except the kuromoji dictionaries, which are smaller with packing disabled 
than with packing enabled.

This should never happen, something is wrong. The way automata compression was 
implemented in Morfologik would nearly always decrease the size of the 
automaton, especially in the first few optimization/ reshuffling steps. [1].

[1] http://www.cs.put.poznan.pl/dweiss/site/publications/download/fsacomp.pdf

> Remove packing support from FST
> ---
>
> Key: LUCENE-7531
> URL: https://issues.apache.org/jira/browse/LUCENE-7531
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7531.patch
>
>
> This seems to be only used for the kuromoji dictionaries, but we could easily 
> rebuild those dictionaries with packing disabled.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6219 - Still Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6219/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure

Error Message:
The data directory was not cleaned up on unload after a failed core reload

Stack Trace:
java.lang.AssertionError: The data directory was not cleaned up on unload after 
a failed core reload
at 
__randomizedtesting.SeedInfo.seed([DC15D22537052A84:A7DC70691427F885]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure(CoreAdminHandlerTest.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11814 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.CoreAdminHandlerTest
   [junit4]   2> Creatin

[jira] [Commented] (LUCENE-7532) Add Lucene 6.2 file format description in codecs/lucene62/package-info.java

2016-11-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626065#comment-15626065
 ] 

Michael McCandless commented on LUCENE-7532:


Thank you [~shinichiro abe], this looks great, I'll push soon.

> Add Lucene 6.2 file format description in codecs/lucene62/package-info.java
> ---
>
> Key: LUCENE-7532
> URL: https://issues.apache.org/jira/browse/LUCENE-7532
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2
>Reporter: Shinichiro Abe
> Attachments: LUCENE-7532.patch
>
>
> Currently that description is missing at branch_6x so I'd like to restore it.
> User feedback: http://markmail.org/message/hxtxzue7qn6ne6vz



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9077) Streaming expressions should support collection alias

2016-11-01 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626072#comment-15626072
 ] 

Kevin Risden commented on SOLR-9077:


Currently the matching logic is as follows:
 - collection name, case sensitive
 - collection name, case insensitive
 - alias name, case sensitive

This means that the current behavior of TopicStream, FeatureSelectionStream, and 
TextLogitStream would be unaffected; this change would just add alias support. 
Are there any known downsides to using aliases, other than perhaps complicating 
things? It looks to me like slices are what is used for the checkpoints, so that 
should just work, with aliases contributing more slices.

I'll double-check that the tests for TopicStream, FeatureSelectionStream, and 
TextLogitStream cover aliases.

One item I might adjust in the tests is to make sure the aliases point to 
multiple collections. Currently the alias only points to a single collection.
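The three-step matching order described above can be sketched with a small, stdlib-only Java model. Note that the maps, names, and `resolve` helper below are hypothetical illustrations of the ordering, not Solr's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NameResolution {
    // Hypothetical stand-ins for the cluster's collections and aliases.
    static final Map<String, String> COLLECTIONS = new LinkedHashMap<>();
    static final Map<String, String> ALIASES = new LinkedHashMap<>();

    static {
        COLLECTIONS.put("Movies", "Movies");
        ALIASES.put("films", "Movies");
    }

    // Resolve a requested name using the order from the comment:
    // 1) collection name, case sensitive
    // 2) collection name, case insensitive
    // 3) alias name, case sensitive
    static String resolve(String name) {
        if (COLLECTIONS.containsKey(name)) {
            return COLLECTIONS.get(name);       // exact collection match
        }
        for (String c : COLLECTIONS.keySet()) {
            if (c.equalsIgnoreCase(name)) {
                return c;                       // case-insensitive collection match
            }
        }
        if (ALIASES.containsKey(name)) {
            return ALIASES.get(name);           // alias match (case sensitive)
        }
        return null;                            // unknown name
    }

    public static void main(String[] args) {
        System.out.println(resolve("Movies")); // exact collection match
        System.out.println(resolve("movies")); // case-insensitive collection match
        System.out.println(resolve("films"));  // alias match
    }
}
```

Because exact collection matches are tried first, an alias can never shadow an existing collection name, which is why the existing streams keep their current behavior.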

> Streaming expressions should support collection alias
> -
>
> Key: SOLR-9077
> URL: https://issues.apache.org/jira/browse/SOLR-9077
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.5.1
>Reporter: Suds
>Priority: Minor
> Attachments: SOLR-9077.patch, SOLR-9077.patch, SOLR-9077.patch, 
> SOLR-9077.patch
>
>
> Streaming expressions in Solr do not support collection aliases.
> When I tried to access a collection alias I got a NullPointerException.
> The issue seems to be related to the following code; clusterState.getActiveSlices 
> returns null:
> {code}
> Collection<Slice> slices = clusterState.getActiveSlices(this.collection);
> for (Slice slice : slices) {
> }
> {code}
> The fix seems fairly simple: clusterState.getActiveSlices can be made aware 
> of collection aliases. I am not sure what will happen when we have a large 
> alias which has hundreds of slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9146) Parallel SQL engine should support >, >=, <, <=, <>, != syntax

2016-11-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626087#comment-15626087
 ] 

Cassandra Targett commented on SOLR-9146:
-

[~risdenk]: I added a section to the Solr Ref Guide about supported WHERE 
operators at 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-SupportedWHEREOperators.
 However, I did not include '!=' because I didn't see it in the 
TestSQLHandler.java tests. Let me know if it is supported even though it is not 
tested. If I should add other operators, or mark some as not supported, please 
let me know - I used a general list of operators and some feedback from the page 
comments.

Due to time constraints, I did not try to cover every scenario demonstrated in 
the test, just wanted to add a section based on this specific issue.

> Parallel SQL engine should support >, >=, <, <=, <>, != syntax
> --
>
> Key: SOLR-9146
> URL: https://issues.apache.org/jira/browse/SOLR-9146
> Project: Solr
>  Issue Type: New Feature
>  Components: Parallel SQL
>Reporter: Timothy Potter
>Assignee: Kevin Risden
> Fix For: 6.3
>
> Attachments: SOLR-9146.patch, SOLR-9146.patch
>
>
> this gives expected result:
> {code}
>  SELECT title_s, COUNT(*) as cnt
> FROM movielens
>  WHERE genre_ss='action' AND rating_i='[4 TO 5]'
> GROUP BY title_s
> ORDER BY cnt desc
>  LIMIT 5
> {code}
> but using >= 4 doesn't give same results (my ratings are 1-5):
> {code}
>   SELECT title_s, COUNT(*) as cnt
>  FROM movielens
>   WHERE genre_ss='action' AND rating_i >= 4
> GROUP BY title_s
> ORDER BY cnt desc
>   LIMIT 5
> {code}
> on the Solr side, I see queries formulated as:
> {code}
> 2016-05-21 14:53:43.096 INFO  (qtp1435804085-1419) [c:movielens
> s:shard1 r:core_node1 x:movielens_shard1_replica1] o.a.s.c.S.Request
> [movielens_shard1_replica1]  webapp=/solr path=/export
> params={q=((genre_ss:"action")+AND+(rating_i:"4"))&distrib=false&fl=title_s&sort=title_s+desc&wt=json&version=2.2}
> hits=2044 status=0 QTime=0
> {code}
> which is obviously wrong ... 
> In general, rather than crafting an incorrect query that gives the
> wrong results, we should throw an exception stating that the syntax is
> not supported.
> Also, the ref guide should be updated to contain a known limitations section 
> so users don't have to guess at what SQL features are supported by Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9146) Parallel SQL engine should support >, >=, <, <=, <>, != syntax

2016-11-01 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626102#comment-15626102
 ] 

Kevin Risden commented on SOLR-9146:


Thanks [~ctargett]! I'll double-check the ref guide page. There are some 
caveats right now to the WHERE support (like one side needs to be a field). 
Both sides being constants (5 < 10) or both sides being fields (fielda > fieldb) 
are not supported. This hopefully gets addressed with SOLR-8593. I can add this 
to the ref guide page. I'm pretty sure '!=' works (even though it is not 
standard SQL like <>); I'll double-check.
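The field-on-one-side caveat can be illustrated with a rough, stdlib-only Java check. The `isField` heuristic and the class/method names here are invented for illustration; the real Calcite/Solr parsing logic is more involved:

```java
public class WherePredicateCheck {
    // Crude heuristic, invented for illustration only: treat quoted strings
    // and plain numbers as constants, everything else as a field reference.
    static boolean isField(String operand) {
        if (operand.startsWith("'") && operand.endsWith("'")) {
            return false;                               // string literal
        }
        return !operand.matches("-?\\d+(\\.\\d+)?");    // numeric literal
    }

    // Per the comment above: a comparison is supported only when exactly
    // one side of the predicate is a field.
    static boolean isSupported(String lhs, String rhs) {
        return isField(lhs) ^ isField(rhs);
    }

    public static void main(String[] args) {
        System.out.println(isSupported("rating_i", "4"));    // true: field vs constant
        System.out.println(isSupported("5", "10"));          // false: two constants
        System.out.println(isSupported("fielda", "fieldb")); // false: two fields
    }
}
```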

> Parallel SQL engine should support >, >=, <, <=, <>, != syntax
> --
>
> Key: SOLR-9146
> URL: https://issues.apache.org/jira/browse/SOLR-9146
> Project: Solr
>  Issue Type: New Feature
>  Components: Parallel SQL
>Reporter: Timothy Potter
>Assignee: Kevin Risden
> Fix For: 6.3
>
> Attachments: SOLR-9146.patch, SOLR-9146.patch
>
>
> this gives expected result:
> {code}
>  SELECT title_s, COUNT(*) as cnt
> FROM movielens
>  WHERE genre_ss='action' AND rating_i='[4 TO 5]'
> GROUP BY title_s
> ORDER BY cnt desc
>  LIMIT 5
> {code}
> but using >= 4 doesn't give same results (my ratings are 1-5):
> {code}
>   SELECT title_s, COUNT(*) as cnt
>  FROM movielens
>   WHERE genre_ss='action' AND rating_i >= 4
> GROUP BY title_s
> ORDER BY cnt desc
>   LIMIT 5
> {code}
> on the Solr side, I see queries formulated as:
> {code}
> 2016-05-21 14:53:43.096 INFO  (qtp1435804085-1419) [c:movielens
> s:shard1 r:core_node1 x:movielens_shard1_replica1] o.a.s.c.S.Request
> [movielens_shard1_replica1]  webapp=/solr path=/export
> params={q=((genre_ss:"action")+AND+(rating_i:"4"))&distrib=false&fl=title_s&sort=title_s+desc&wt=json&version=2.2}
> hits=2044 status=0 QTime=0
> {code}
> which is obviously wrong ... 
> In general, rather than crafting an incorrect query that gives the
> wrong results, we should throw an exception stating that the syntax is
> not supported.
> Also, the ref guide should be updated to contain a known limitations section 
> so users don't have to guess at what SQL features are supported by Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8542) Integrate Learning to Rank into Solr

2016-11-01 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8542:
--
Attachment: SOLR-8542.patch

Attaching patch generated as diff between 'master' and 
https://github.com/apache/lucene-solr/tree/jira/solr-8542-v2 - master commit to 
follow shortly.

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9146) Parallel SQL engine should support >, >=, <, <=, <>, != syntax

2016-11-01 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626134#comment-15626134
 ] 

Kevin Risden commented on SOLR-9146:


Expanded the supported WHERE operators section to include some examples. Also 
added a note about the WHERE clause requiring a field on one side of the 
predicate.

> Parallel SQL engine should support >, >=, <, <=, <>, != syntax
> --
>
> Key: SOLR-9146
> URL: https://issues.apache.org/jira/browse/SOLR-9146
> Project: Solr
>  Issue Type: New Feature
>  Components: Parallel SQL
>Reporter: Timothy Potter
>Assignee: Kevin Risden
> Fix For: 6.3
>
> Attachments: SOLR-9146.patch, SOLR-9146.patch
>
>
> this gives expected result:
> {code}
>  SELECT title_s, COUNT(*) as cnt
> FROM movielens
>  WHERE genre_ss='action' AND rating_i='[4 TO 5]'
> GROUP BY title_s
> ORDER BY cnt desc
>  LIMIT 5
> {code}
> but using >= 4 doesn't give same results (my ratings are 1-5):
> {code}
>   SELECT title_s, COUNT(*) as cnt
>  FROM movielens
>   WHERE genre_ss='action' AND rating_i >= 4
> GROUP BY title_s
> ORDER BY cnt desc
>   LIMIT 5
> {code}
> on the Solr side, I see queries formulated as:
> {code}
> 2016-05-21 14:53:43.096 INFO  (qtp1435804085-1419) [c:movielens
> s:shard1 r:core_node1 x:movielens_shard1_replica1] o.a.s.c.S.Request
> [movielens_shard1_replica1]  webapp=/solr path=/export
> params={q=((genre_ss:"action")+AND+(rating_i:"4"))&distrib=false&fl=title_s&sort=title_s+desc&wt=json&version=2.2}
> hits=2044 status=0 QTime=0
> {code}
> which is obviously wrong ... 
> In general, rather than crafting an incorrect query that gives the
> wrong results, we should throw an exception stating that the syntax is
> not supported.
> Also, the ref guide should be updated to contain a known limitations section 
> so users don't have to guess at what SQL features are supported by Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7532) Add Lucene 6.2 file format description in codecs/lucene62/package-info.java

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626172#comment-15626172
 ] 

ASF subversion and git services commented on LUCENE-7532:
-

Commit 830a3fdfbf12553d47f8f6c320c862aa88fdc48b in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=830a3fd ]

LUCENE-7532: add back lost codec file format documentation


> Add Lucene 6.2 file format description in codecs/lucene62/package-info.java
> ---
>
> Key: LUCENE-7532
> URL: https://issues.apache.org/jira/browse/LUCENE-7532
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2
>Reporter: Shinichiro Abe
> Fix For: 6.4
>
> Attachments: LUCENE-7532.patch
>
>
> Currently that description is missing at branch_6x so I'd like to restore it.
> User feedback: http://markmail.org/message/hxtxzue7qn6ne6vz



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7532) Add Lucene 6.2 file format description in codecs/lucene62/package-info.java

2016-11-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7532.

   Resolution: Fixed
Fix Version/s: 6.4

Thanks [~shinichiro abe]!

> Add Lucene 6.2 file format description in codecs/lucene62/package-info.java
> ---
>
> Key: LUCENE-7532
> URL: https://issues.apache.org/jira/browse/LUCENE-7532
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2
>Reporter: Shinichiro Abe
> Fix For: 6.4
>
> Attachments: LUCENE-7532.patch
>
>
> Currently that description is missing at branch_6x so I'd like to restore it.
> User feedback: http://markmail.org/message/hxtxzue7qn6ne6vz



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.3-Windows (32bit/jdk1.8.0_102) - Build # 2 - Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.3-Windows/2/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<152> but was:<138>

Stack Trace:
java.lang.AssertionError: expected:<152> but was:<138>
at 
__randomizedtesting.SeedInfo.seed([521E3B1694E30B28:DA4A04CC3A1F66D0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:280)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAf

[jira] [Commented] (SOLR-9146) Parallel SQL engine should support >, >=, <, <=, <>, != syntax

2016-11-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626307#comment-15626307
 ] 

Cassandra Targett commented on SOLR-9146:
-

Thanks so much for your help. 

> Parallel SQL engine should support >, >=, <, <=, <>, != syntax
> --
>
> Key: SOLR-9146
> URL: https://issues.apache.org/jira/browse/SOLR-9146
> Project: Solr
>  Issue Type: New Feature
>  Components: Parallel SQL
>Reporter: Timothy Potter
>Assignee: Kevin Risden
> Fix For: 6.3
>
> Attachments: SOLR-9146.patch, SOLR-9146.patch
>
>
> this gives expected result:
> {code}
>  SELECT title_s, COUNT(*) as cnt
> FROM movielens
>  WHERE genre_ss='action' AND rating_i='[4 TO 5]'
> GROUP BY title_s
> ORDER BY cnt desc
>  LIMIT 5
> {code}
> but using >= 4 doesn't give same results (my ratings are 1-5):
> {code}
>   SELECT title_s, COUNT(*) as cnt
>  FROM movielens
>   WHERE genre_ss='action' AND rating_i >= 4
> GROUP BY title_s
> ORDER BY cnt desc
>   LIMIT 5
> {code}
> on the Solr side, I see queries formulated as:
> {code}
> 2016-05-21 14:53:43.096 INFO  (qtp1435804085-1419) [c:movielens
> s:shard1 r:core_node1 x:movielens_shard1_replica1] o.a.s.c.S.Request
> [movielens_shard1_replica1]  webapp=/solr path=/export
> params={q=((genre_ss:"action")+AND+(rating_i:"4"))&distrib=false&fl=title_s&sort=title_s+desc&wt=json&version=2.2}
> hits=2044 status=0 QTime=0
> {code}
> which is obviously wrong ... 
> In general, rather than crafting an incorrect query that gives the
> wrong results, we should throw an exception stating that the syntax is
> not supported.
> Also, the ref guide should be updated to contain a known limitations section 
> so users don't have to guess at what SQL features are supported by Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-01 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626331#comment-15626331
 ] 

Walter Underwood commented on SOLR-4735:


Anybody using the CodaHale metrics.jetty9.InstrumentedHandler? It looks a lot 
like something we built for our own use with Solr 4.

http://metrics.dropwizard.io/3.1.0/manual/jetty/
http://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/jetty9/InstrumentedHandler.html

wunder


> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale&subj=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-01 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626442#comment-15626442
 ] 

Jeff Wartes commented on SOLR-4735:
---

I have, and am, by instantiating a SharedMetricRegistry and GraphiteReporter 
directly in jetty.xml. (Which is hacky but, in lieu of SOLR-8785, works fine.)
I'm also using the logging and JVM metrics plugins quite happily.
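For reference, wiring Dropwizard's InstrumentedHandler directly in jetty.xml might look roughly like the following. This is an unverified config sketch: the registry name "solr" is an arbitrary placeholder, and the exact handler wiring may differ from the setup described above.

```xml
<!-- Unverified sketch: wrap Jetty's handler chain in Dropwizard's
     InstrumentedHandler, pulling a registry from SharedMetricRegistries.
     The registry name "solr" is an arbitrary placeholder. -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="insertHandler">
    <Arg>
      <New class="com.codahale.metrics.jetty9.InstrumentedHandler">
        <Arg>
          <Call class="com.codahale.metrics.SharedMetricRegistries"
                name="getOrCreate">
            <Arg>solr</Arg>
          </Call>
        </Arg>
      </New>
    </Arg>
  </Call>
</Configure>
```

A reporter (e.g. GraphiteReporter) would then be started against the same shared registry so the handler's request/response metrics get shipped out.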



> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale&subj=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626456#comment-15626456
 ] 

ASF subversion and git services commented on SOLR-8542:
---

Commit 5a66b3bc089e4b3e73b1c41c4cdcd89b183b85e7 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a66b3b ]

SOLR-8542: Adds Solr Learning to Rank (LTR) plugin for reranking results with 
machine learning models. (Michael Nilsson, Diego Ceccarelli, Joshua Pantony, 
Jon Dorando, Naveen Santhapuri, Alessandro Benedetti, David Grohmann, Christine 
Poerschke)


> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9077) Streaming expressions should support collection alias

2016-11-01 Thread Suds (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626500#comment-15626500
 ] 

Suds commented on SOLR-9077:


One issue I faced: I was working with a large cluster of ~40 nodes (40 shards); 
in that case there would be many slices per alias. I'm not sure whether we need 
to throw an error, since it may cause issues when creating a fixed-size thread 
pool if the number of slices exceeds some threshold.

> Streaming expressions should support collection alias
> -
>
> Key: SOLR-9077
> URL: https://issues.apache.org/jira/browse/SOLR-9077
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.5.1
>Reporter: Suds
>Priority: Minor
> Attachments: SOLR-9077.patch, SOLR-9077.patch, SOLR-9077.patch, 
> SOLR-9077.patch
>
>
> Streaming expressions in Solr do not support collection aliases.
> When I tried to access a collection alias I got a NullPointerException.
> The issue seems to be related to the following code; clusterState.getActiveSlices 
> returns null:
> {code}
> Collection<Slice> slices = clusterState.getActiveSlices(this.collection);
> for (Slice slice : slices) {
> }
> {code}
> The fix seems fairly simple: clusterState.getActiveSlices can be made aware 
> of collection aliases. I am not sure what will happen with a large alias that 
> maps to hundreds of slices.
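The quoted code fails because getActiveSlices is handed the alias name rather than a concrete collection. A minimal sketch of the proposed alias-aware lookup, with plain maps standing in for Solr's ClusterState and Aliases (all names here are illustrative, not Solr's actual API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an alias-aware slice lookup.
public class AliasAwareSlices {

    // Resolve a name to concrete collection names; a plain collection name
    // resolves to itself, an alias may map to a comma-separated list.
    static List<String> resolveAlias(String name, Map<String, String> aliases) {
        String target = aliases.get(name);
        if (target == null) {
            return Collections.singletonList(name);
        }
        List<String> out = new ArrayList<>();
        for (String collection : target.split(",")) {
            out.add(collection.trim());
        }
        return out;
    }

    // Gather slices across every collection the name resolves to, instead of
    // calling getActiveSlices(alias) directly and getting back null.
    static List<String> activeSlices(String name,
                                     Map<String, String> aliases,
                                     Map<String, List<String>> slicesByCollection) {
        List<String> slices = new ArrayList<>();
        for (String collection : resolveAlias(name, aliases)) {
            slices.addAll(slicesByCollection.getOrDefault(collection, List.of()));
        }
        return slices;
    }
}
```

Note that an alias spanning several collections multiplies the slice count, which is exactly the threadpool-sizing concern raised above.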



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 555 - Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/555/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([42553215307F2E66:BB18A1BA0C0A63EC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:284)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Created] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2016-11-01 Thread James Dyer (JIRA)
James Dyer created SOLR-9710:


 Summary: SpellCheckComponentTest (still) occasionally fails
 Key: SOLR-9710
 URL: https://issues.apache.org/jira/browse/SOLR-9710
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spellchecker
Affects Versions: 6.2.1
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 6.3


In December 2015, I addressed occasional, non-reproducible failures in the 
Spellcheck Component tests.  These were failing with this warning:

bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2

...and the test itself would run before the test data was committed, resulting 
in failure.

This problem is re-occurring and needs a better fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2016-11-01 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626662#comment-15626662
 ] 

James Dyer commented on SOLR-9710:
--

I tried to remedy this problem by adding this to the appropriate solrconfig.xml:

{code:xml}
<useColdSearcher>false</useColdSearcher>
<maxWarmingSearchers>1</maxWarmingSearchers>
{code}

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.3
>
>
> In December 2015, I addressed occasional, non-reproducible failures in the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is re-occurring and needs a better fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2016-11-01 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626665#comment-15626665
 ] 

James Dyer commented on SOLR-9710:
--

Recent test failures:

https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/938/
https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2021/
https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2017/
https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1982/
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18081/

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.3
>
>
> In December 2015, I addressed occasional, non-reproducible failures in the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is re-occurring and needs a better fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2016-11-01 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-9710:
-
Attachment: SOLR-9710.patch

I believe this ([^SOLR-9710.patch]) is all that is needed to fix this.  But if 
you know better, please speak up.

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-9710.patch
>
>
> In December 2015, I addressed occasional, non-reproducible failures in the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is re-occurring and needs a better fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.3.0 RC2

2016-11-01 Thread Michael McCandless
I think there's possibly a bad problem with LUCENE-7501 (creates a
possibly corrupt index) ... I'm trying to dig more to be sure ...

Mike McCandless

http://blog.mikemccandless.com

On Mon, Oct 31, 2016 at 2:28 PM, Shalin Shekhar Mangar
 wrote:
> Please vote for the second release candidate for Lucene/Solr 6.3.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.3.0-RC2-rev1fe1a54db32b8c27bfae81887cd4d75242090613/
>
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.3.0-RC2-rev1fe1a54db32b8c27bfae81887cd4d75242090613/
>
> Smoke tester passed for me:
> SUCCESS! [0:35:05.847870]
>
> Here's my +1 to release.
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9077) Streaming expressions should support collection alias

2016-11-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626756#comment-15626756
 ] 

Joel Bernstein commented on SOLR-9077:
--

The problem with aliases with Topics is that it encourages using the alias as a 
pointer to the collection. When someone reassigns the alias to a different 
collection, it will break the topic even if the data is the same. I think it 
makes sense to treat topics as an outlier and not support aliases for them.



> Streaming expressions should support collection alias
> -
>
> Key: SOLR-9077
> URL: https://issues.apache.org/jira/browse/SOLR-9077
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.5.1
>Reporter: Suds
>Priority: Minor
> Attachments: SOLR-9077.patch, SOLR-9077.patch, SOLR-9077.patch, 
> SOLR-9077.patch
>
>
> Streaming expressions in Solr do not support collection aliases.
> When I tried to access a collection alias I got a NullPointerException.
> The issue seems to be related to the following code; clusterState.getActiveSlices 
> returns null:
> {code}
> Collection<Slice> slices = clusterState.getActiveSlices(this.collection);
> for (Slice slice : slices) {
> }
> {code}
> The fix seems fairly simple: clusterState.getActiveSlices can be made aware 
> of collection aliases. I am not sure what will happen with a large alias that 
> maps to hundreds of slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9711) Build parameter to silence Changes.html generation error if SOLR Jira is not accessible

2016-11-01 Thread Mano Kovacs (JIRA)
Mano Kovacs created SOLR-9711:
-

 Summary: Build parameter to silence Changes.html generation error 
if SOLR Jira is not accessible
 Key: SOLR-9711
 URL: https://issues.apache.org/jira/browse/SOLR-9711
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mano Kovacs


If the build is running behind a firewall with no access to issues.apache.org, 
generation of Changes.html fails, failing the entire build.

Supporting a -DignoreBuildChangeError parameter that skips generation of the 
html file upon a network error would solve the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9711) Build parameter to silence Changes.html generation error if SOLR Jira is not accessible

2016-11-01 Thread Mano Kovacs (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-9711:
--
Description: 
If the build is running behind a firewall with no access to issues.apache.org, 
generation of Changes.html fails, failing the entire build.

Supporting a -Dchanges.ignoreError parameter that skips generation of the html 
file upon a network error would solve the issue.

  was:
If the build is running behind a firewall with no access to issues.apache.org, 
generation of Changes.html fails, failing the entire build.

Supporting a -DignoreBuildChangeError parameter that skips generation of the 
html file upon a network error would solve the issue.


> Build parameter to silence Changes.html generation error if SOLR Jira is not 
> accessible
> ---
>
> Key: SOLR-9711
> URL: https://issues.apache.org/jira/browse/SOLR-9711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>  Labels: build
>
> If the build is running behind a firewall with no access to 
> issues.apache.org, generation of Changes.html fails, failing the entire build.
> Supporting a -Dchanges.ignoreError parameter that skips generation of the 
> html file upon a network error would solve the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626839#comment-15626839
 ] 

Jan Høydahl commented on SOLR-9481:
---

Perhaps as a first step we should create a new JIRA to fix this for tests. I 
don't know exactly how, but it looks like something should be done...

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9711) Build parameter to silence Changes.html generation error if SOLR Jira is not accessible

2016-11-01 Thread Mano Kovacs (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-9711:
--
Attachment: SOLR-9711.patch

> Build parameter to silence Changes.html generation error if SOLR Jira is not 
> accessible
> ---
>
> Key: SOLR-9711
> URL: https://issues.apache.org/jira/browse/SOLR-9711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>  Labels: build
> Attachments: SOLR-9711.patch
>
>
> If the build is running behind a firewall with no access to 
> issues.apache.org, generation of Changes.html fails, failing the entire build.
> Supporting a -Dchanges.ignoreError parameter that skips generation of the 
> html file upon a network error would solve the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9711) Build parameter to silence Changes.html generation error if SOLR Jira is not accessible

2016-11-01 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626997#comment-15626997
 ] 

Steve Rowe commented on SOLR-9711:
--

I can't find where (maybe not in JIRA?) but [~hossman] advocated moving 
Lucene's and Solr's {{doap.rdf}} files, which contain all of the release dates 
that the {{changes-to-html}} ant task now pulls from JIRA, from the CMS 
Subversion repository (downloadable from the website at 
[http://lucene.apache.org/core/doap.rdf] and 
[http://lucene.apache.org/solr/doap.rdf]) to the Lucene/Solr git repository.  
If we did that, then the process would be entirely offline.

> Build parameter to silence Changes.html generation error if SOLR Jira is not 
> accessible
> ---
>
> Key: SOLR-9711
> URL: https://issues.apache.org/jira/browse/SOLR-9711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>  Labels: build
> Attachments: SOLR-9711.patch
>
>
> If the build is running behind a firewall with no access to 
> issues.apache.org, generation of Changes.html fails, failing the entire build.
> Supporting a -Dchanges.ignoreError parameter that skips generation of the 
> html file upon a network error would solve the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9711) Build parameter to silence Changes.html generation error if SOLR Jira is not accessible

2016-11-01 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626997#comment-15626997
 ] 

Steve Rowe edited comment on SOLR-9711 at 11/1/16 11:14 PM:


I can't find where (maybe not in JIRA?) but [~hossman] advocated moving 
Lucene's and Solr's {{doap.rdf}} files, which contain all of the release dates 
that the {{changes-to-html}} ant task now pulls from JIRA, from the CMS 
Subversion repository (downloadable from the website at 
[http://lucene.apache.org/core/doap.rdf] and 
[http://lucene.apache.org/solr/doap.rdf]) to the Lucene/Solr git repository.  
If we did that, then the process could be entirely offline if release dates 
were taken from the local {{doap.rdf}} files instead of downloaded from JIRA.


was (Author: steve_rowe):
I can't find where (maybe not in JIRA?) but [~hossman] advocated moving 
Lucene's and Solr's {{doap.rdf}} files, which contain all of the release dates 
that the {{changes-to-html}} ant task now pulls from JIRA, from the CMS 
Subversion repository (downloadable from the website at 
[http://lucene.apache.org/core/doap.rdf] and 
[http://lucene.apache.org/solr/doap.rdf]) to the Lucene/Solr git repository.  
If we did that, then the process would be entirely offline.

> Build parameter to silence Changes.html generation error if SOLR Jira is not 
> accessible
> ---
>
> Key: SOLR-9711
> URL: https://issues.apache.org/jira/browse/SOLR-9711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>  Labels: build
> Attachments: SOLR-9711.patch
>
>
> If the build is running behind a firewall with no access to 
> issues.apache.org, generation of Changes.html fails, failing the entire build.
> Supporting a -Dchanges.ignoreError parameter that skips generation of the 
> html file upon a network error would solve the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7501) Do not encode the split dimension in the index in the 1D case

2016-11-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7501:
---
Priority: Blocker  (was: Minor)

> Do not encode the split dimension in the index in the 1D case
> -
>
> Key: LUCENE-7501
> URL: https://issues.apache.org/jira/browse/LUCENE-7501
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7501.patch
>
>
> When there is a single dimension, the split dimension is always 0, so we do 
> not need to encode it in the index of the BKD tree. This would be 33% memory 
> saving for half floats, 20% for ints/floats, 11% for longs/doubles and 6% for 
> ip addresses.
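The quoted percentages are consistent with each inner-node index entry storing one split-dimension byte plus bytesPerDim bytes of split value, so dropping that byte in the 1D case saves 1 / (1 + bytesPerDim) of the entry. A quick sanity check of the arithmetic (the entry layout is an assumption inferred from the numbers, not taken from the BKD code):

```java
// Sanity-check of the quoted memory savings under the assumed entry layout:
// 1 split-dimension byte + bytesPerDim bytes of split value per entry.
public class BkdSavings {
    static long savingsPercent(int bytesPerDim) {
        // Fraction of the entry occupied by the split-dimension byte.
        return Math.round(100.0 / (1 + bytesPerDim));
    }

    public static void main(String[] args) {
        System.out.println(savingsPercent(2));  // half float   -> 33
        System.out.println(savingsPercent(4));  // int/float    -> 20
        System.out.println(savingsPercent(8));  // long/double  -> 11
        System.out.println(savingsPercent(16)); // IPv6 address -> 6
    }
}
```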



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7501) Do not encode the split dimension in the index in the 1D case

2016-11-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-7501:


I think there's a small back compat bug here, that looks like corruption.

I'll attach a quick patch, but maybe there's a cleaner way e.g. to factor out 
the address++ / splitDim logic.

I also hit some silly bugs in CheckIndex.

> Do not encode the split dimension in the index in the 1D case
> -
>
> Key: LUCENE-7501
> URL: https://issues.apache.org/jira/browse/LUCENE-7501
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7501.patch
>
>
> When there is a single dimension, the split dimension is always 0, so we do 
> not need to encode it in the index of the BKD tree. This would be 33% memory 
> saving for half floats, 20% for ints/floats, 11% for longs/doubles and 6% for 
> ip addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7501) Do not encode the split dimension in the index in the 1D case

2016-11-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627021#comment-15627021
 ] 

Michael McCandless commented on LUCENE-7501:


I think we have to fix this for 6.3.0.

> Do not encode the split dimension in the index in the 1D case
> -
>
> Key: LUCENE-7501
> URL: https://issues.apache.org/jira/browse/LUCENE-7501
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7501.patch
>
>
> When there is a single dimension, the split dimension is always 0, so we do 
> not need to encode it in the index of the BKD tree. This would be 33% memory 
> saving for half floats, 20% for ints/floats, 11% for longs/doubles and 6% for 
> ip addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7501) Do not encode the split dimension in the index in the 1D case

2016-11-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7501:
---
Attachment: LUCENE-7501.patch

Patch.

> Do not encode the split dimension in the index in the 1D case
> -
>
> Key: LUCENE-7501
> URL: https://issues.apache.org/jira/browse/LUCENE-7501
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7501.patch, LUCENE-7501.patch
>
>
> When there is a single dimension, the split dimension is always 0, so we do 
> not need to encode it in the index of the BKD tree. This would be 33% memory 
> saving for half floats, 20% for ints/floats, 11% for longs/doubles and 6% for 
> ip addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7533) Classic query parser: autoGeneratePhraseQueries=true doesn't work when splitOnWhitespace=false

2016-11-01 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7533:
--

 Summary: Classic query parser: autoGeneratePhraseQueries=true 
doesn't work when splitOnWhitespace=false
 Key: LUCENE-7533
 URL: https://issues.apache.org/jira/browse/LUCENE-7533
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 6.2.1, 6.2, 6.3
Reporter: Steve Rowe


LUCENE-2605 introduced the classic query parser option to not split on 
whitespace prior to performing analysis.

When splitOnWhitespace=false, the output from analysis can now come from 
multiple whitespace-separated tokens, which breaks code assumptions when 
autoGeneratePhraseQueries=true: for this combination of options, it's not 
appropriate to auto-quote multiple non-overlapping tokens produced by analysis. 
 E.g. simple whitespace tokenization over the query "some words" will produce 
the token sequence ("some", "words"), and even when 
autoGeneratePhraseQueries=true, we should not be creating a phrase query here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Merge indexes with CoreAdminHandler - possible issue

2016-11-01 Thread Alexey Timofeev
Hello!

I am stuck with CoreAdminHandler's merge-indexes functionality. It looks
like merging indexes behaves oddly when used with the "srcCore" parameter. (If
we use the "indexDir" parameter it works fine.) Please tell me whether this is
a real bug in Solr or I am using it wrong. If the community agrees that it is
indeed a bug, let's create a ticket for it and I will be happy to suggest a
patch to fix it.

Now let me explain what I think is wrong with merging indexes using the
"srcCore" parameter. The trouble is that when the merge code starts to merge
doc values fields, it mistakenly treats every field that can be uninverted as
a doc values field. That results in uninverting all uninvertable fields and
writing the result into the merged index. Thus memory consumption is huge, the
chance of an OOM is high, and the resulting index is bloated.

Now, why does the merge code consider all uninvertable fields to be doc
values? Because it treats every field where (FieldInfo.docValuesType !=
DocValuesType.NONE) as doc values. The FieldInfo objects are provided by the
IndexReader, and since we always have an UninvertingReader in the chain of
readers, we always get FieldInfo.docValuesType set to the doc values type to
which that field can be converted. Thus we almost always have
(FieldInfo.docValuesType != DocValuesType.NONE) and uninvert almost all fields.

Is there a way to create a core without an UninvertingReader in the chain of
readers? If so, is that the expected usage or just a workaround?

If loading a core without an UninvertingReader is not what I am meant to do,
then I would suggest consulting the schema to find out which fields are doc
values instead of relying on FieldInfo.docValuesType.
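The schema-consulting check suggested above could look roughly like this sketch, with simplified stand-ins for Solr's FieldInfo and schema (the types and method names here are illustrative, not Solr's actual API):

```java
import java.util.List;
import java.util.Set;

// Sketch: only merge a field as doc values when the reader reports a doc
// values type AND the schema actually declares docValues="true" for it; an
// UninvertingReader alone setting the type on FieldInfo is not enough.
public class DocValuesSelection {
    record FieldInfo(String name, boolean docValuesTypeSet) {}

    static List<String> fieldsToMergeAsDocValues(List<FieldInfo> infos,
                                                 Set<String> schemaDocValuesFields) {
        return infos.stream()
                .filter(FieldInfo::docValuesTypeSet)                 // reader's view
                .filter(fi -> schemaDocValuesFields.contains(fi.name())) // schema confirms
                .map(FieldInfo::name)
                .toList();
    }
}
```

With this filter, a field that is merely uninvertable (so the UninvertingReader reports a doc values type) but not declared as doc values in the schema would be skipped during the merge.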

Thank you in advance. Looking forward to your replies!

-- 
Regards.


[jira] [Updated] (LUCENE-7533) Classic query parser: autoGeneratePhraseQueries=true doesn't work when splitOnWhitespace=false

2016-11-01 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7533:
---
Attachment: LUCENE-7533.patch

Patch that addresses some of this issue, with some failing tests and nocommits.

The existing autoGeneratePhraseQueries=true approach generates queries exactly 
as if the query had contained quotation marks, but as I mentioned above, this 
is inappropriate when splitOnWhitespace=false and the query text contains 
spaces.

The approach in the patch is to add a new QueryBuilder method to handle the 
autoGeneratePhraseQueries=true case.  The query text is split on whitespace and 
these tokens' offsets are compared to those produced by the configured 
analyzer.  When multiple non-overlapping tokens have offsets within the bounds 
of a single whitespace-separated token, a phrase query is created.  If the 
original token is present as a token overlapping with the first split token, 
then a disjunction query is created with the original token and the phrase 
query of the split tokens.
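The offset comparison described above can be sketched as follows, with simplified stand-in types rather than Lucene's QueryBuilder and TokenStream APIs (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the grouping step: map each analyzer token (by start/end offset)
// to the whitespace-separated chunk of the raw query that contains it. A chunk
// covering several tokens is a candidate for auto-generating a phrase query.
public class OffsetGrouping {
    record Token(String text, int start, int end) {}

    // Start/end offsets of whitespace-separated chunks in the raw query text.
    static List<int[]> chunkOffsets(String query) {
        List<int[]> chunks = new ArrayList<>();
        int i = 0;
        while (i < query.length()) {
            while (i < query.length() && Character.isWhitespace(query.charAt(i))) i++;
            int start = i;
            while (i < query.length() && !Character.isWhitespace(query.charAt(i))) i++;
            if (i > start) chunks.add(new int[] {start, i});
        }
        return chunks;
    }

    // Count how many analyzer tokens fall inside each chunk; a count > 1 means
    // the chunk would be auto-quoted into a phrase under this approach.
    static List<Integer> tokensPerChunk(String query, List<Token> tokens) {
        List<Integer> counts = new ArrayList<>();
        for (int[] c : chunkOffsets(query)) {
            int n = 0;
            for (Token t : tokens) {
                if (t.start() >= c[0] && t.end() <= c[1]) n++;
            }
            counts.add(n);
        }
        return counts;
    }
}
```

For example, with WordDelimiterFilter-style splitting, "wi-fi words" yields tokens ("wi", "fi", "words"); the first chunk contains two tokens and would become a phrase, while "words" stays a term query.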

I've added a couple of tests that show posincr/poslength/offset output from 
SynonymFilter and WordDelimiterFilter (likely the two most frequently used 
analysis components that can create split tokens), and both create corrupt 
token graphs of various kinds (e.g. LUCENE-6582, LUCENE-5051), so solving this 
problem in a complete way just isn't possible right now.

So I'm not happy with the approach in the patch.  It only covers a subset of 
possible token graphs (e.g. more than one overlapping multi-term synonym 
doesn't work).  And it's a lot of new code solving a problem that AFAIK no user 
has reported (does anybody even use autoGeneratePhraseQueries=true with the 
classic QP?).

I'd be much happier if we could somehow get TermAutomatonQuery hooked into the 
query parsers, and then rewrite to simpler queries where possible: LUCENE-6824.  
The first step, though, is unbreaking SynonymFilter and friends so they produce 
non-broken token graphs.  Attempts to do this for SynonymFilter have stalled: 
LUCENE-6664.  (I have the germ of an idea that might break the logjam; I'll 
post over there.)

For this issue, maybe instead of my patch, for now, we just disallow 
autoGeneratePhraseQueries=true when splitOnWhitespace=false.

Thoughts?

> Classic query parser: autoGeneratePhraseQueries=true doesn't work when 
> splitOnWhitespace=false
> --
>
> Key: LUCENE-7533
> URL: https://issues.apache.org/jira/browse/LUCENE-7533
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2, 6.3, 6.2.1
>Reporter: Steve Rowe
> Attachments: LUCENE-7533.patch
>
>
> LUCENE-2605 introduced the classic query parser option to not split on 
> whitespace prior to performing analysis.
> When splitOnWhitespace=false, the output from analysis can now come from 
> multiple whitespace-separated tokens, which breaks code assumptions when 
> autoGeneratePhraseQueries=true: for this combination of options, it's not 
> appropriate to auto-quote multiple non-overlapping tokens produced by 
> analysis.  E.g. simple whitespace tokenization over the query "some words" 
> will produce the token sequence ("some", "words"), and even when 
> autoGeneratePhraseQueries=true, we should not be creating a phrase query here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7533) Classic query parser: autoGeneratePhraseQueries=true doesn't work when splitOnWhitespace=false

2016-11-01 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627142#comment-15627142
 ] 

Steve Rowe commented on LUCENE-7533:


FYI autoGeneratePhraseQueries was never added to the flexible query parser.







[jira] [Updated] (LUCENE-7533) Classic query parser: autoGeneratePhraseQueries=true doesn't work when splitOnWhitespace=false

2016-11-01 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7533:
---
Description: 
LUCENE-2605 introduced the classic query parser option to not split on 
whitespace prior to performing analysis.

From the javadocs for QueryParser.setAutoGeneratePhraseQueries(): 
bq.phrase queries will be automatically generated when the analyzer returns 
more than one term from whitespace delimited text.

When splitOnWhitespace=false, the output from analysis can now come from 
multiple whitespace-separated tokens, which breaks code assumptions when 
autoGeneratePhraseQueries=true: for this combination of options, it's not 
appropriate to auto-quote multiple non-overlapping tokens produced by analysis. 
 E.g. simple whitespace tokenization over the query "some words" will produce 
the token sequence ("some", "words"), and even when 
autoGeneratePhraseQueries=true, we should not be creating a phrase query here.

  was:
LUCENE-2605 introduced the classic query parser option to not split on 
whitespace prior to performing analysis.

When splitOnWhitespace=false, the output from analysis can now come from 
multiple whitespace-separated tokens, which breaks code assumptions when 
autoGeneratePhraseQueries=true: for this combination of options, it's not 
appropriate to auto-quote multiple non-overlapping tokens produced by analysis. 
 E.g. simple whitespace tokenization over the query "some words" will produce 
the token sequence ("some", "words"), and even when 
autoGeneratePhraseQueries=true, we should not be creating a phrase query here.


> Classic query parser: autoGeneratePhraseQueries=true doesn't work when 
> splitOnWhitespace=false
> --
>
> Key: LUCENE-7533
> URL: https://issues.apache.org/jira/browse/LUCENE-7533
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2, 6.3, 6.2.1
>Reporter: Steve Rowe
> Attachments: LUCENE-7533.patch
>
>
> LUCENE-2605 introduced the classic query parser option to not split on 
> whitespace prior to performing analysis.
> From the javadocs for QueryParser.setAutoGeneratePhraseQueries(): 
> bq.phrase queries will be automatically generated when the analyzer returns 
> more than one term from whitespace delimited text.
> When splitOnWhitespace=false, the output from analysis can now come from 
> multiple whitespace-separated tokens, which breaks code assumptions when 
> autoGeneratePhraseQueries=true: for this combination of options, it's not 
> appropriate to auto-quote multiple non-overlapping tokens produced by 
> analysis.  E.g. simple whitespace tokenization over the query "some words" 
> will produce the token sequence ("some", "words"), and even when 
> autoGeneratePhraseQueries=true, we should not be creating a phrase query here.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18194 - Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18194/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestDownShardTolerantSearch.searchingShouldFailWithoutTolerantSearchSetToTrue

Error Message:
Error message from server should have the name of the down shard

Stack Trace:
java.lang.AssertionError: Error message from server should have the name of the 
down shard
at 
__randomizedtesting.SeedInfo.seed([3275B3C4ABBF7498:A512CFAE1D15F9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestDownShardTolerantSearch.searchingShouldFailWithoutTolerantSearchSetToTrue(TestDownShardTolerantSearch.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 191 - Still Unstable

2016-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/191/

4 tests failed.
FAILED:  
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig

Error Message:
The max direct memory is likely too low.  Either increase it (by adding 
-XX:MaxDirectMemorySize=<size>g -XX:+UseLargePages to your containers startup 
args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.

Stack Trace:
java.lang.RuntimeException: The max direct memory is likely too low.  Either 
increase it (by adding -XX:MaxDirectMemorySize=<size>g -XX:+UseLargePages to 
your containers startup args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.
at 
__randomizedtesting.SeedInfo.seed([F1F1B13441B49802:65E781FBC3D7229]:0)
at 
org.apache.solr.core.HdfsDirectoryFactory.createBlockCache(HdfsDirectoryFactory.java:304)
at 
org.apache.solr.core.HdfsDirectoryFactory.getBlockDirectoryCache(HdfsDirectoryFactory.java:280)
at 
org.apache.solr.core.HdfsDirectoryFactory.create(HdfsDirectoryFactory.java:220)
at 
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig(HdfsDirectoryFactoryTest.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.

[jira] [Commented] (LUCENE-6664) Replace SynonymFilter with SynonymGraphFilter

2016-11-01 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627246#comment-15627246
 ] 

Steve Rowe commented on LUCENE-6664:


[~mikemccand], I think your repurposing of posincr/poslen on this issue (as 
node ids) is to enable non-lossy query parser interpretation of token streams, 
so that e.g. tokens from overlapping phrases aren't inappropriately interleaved 
in generated queries, like your wtf example on 
[LUCENE-6582|https://issues.apache.org/jira/browse/LUCENE-6582?focusedCommentId=14592501&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14592501]):

{quote}
 if I have these synonyms:
{noformat}
wtf --> what the fudge
wtf --> wow that's funny
{noformat}
And then I'm tokenizing this:
{noformat}
wtf happened
{noformat}
Before this change (today) I get this crazy sausage incorrectly
matching phrases like "wtf the fudge" and "wow happened funny":
!https://issues.apache.org/jira/secure/attachment/12740491/12740491_before.png!
But after this change, the expanded synonyms become separate paths in
the graph right? So it will look like this?:
!https://issues.apache.org/jira/secure/attachment/12740492/12740492_after.png!
{quote}

An alternative implementation idea I had, which would not change posincr/poslen 
semantics, is to add a new attribute encoding an entity ID.  Graph-aware 
producers would mark tokens that should be treated as a sequence with the same 
entity ID, and graph-aware consumers would use the entity ID to losslessly 
interpret the resulting graph.  Here's the wtf example using this scheme:

||token||posInc||posLen||entityID||
|wtf|1|3|0|
|what|0|1|1|
|wow|0|1|2|
|the|1|1|1|
|that's|0|1|2|
|fudge|1|1|1|
|funny|0|1|2|
|happened|1|1|3|

No flattening stage is required.  Non-graph-aware components aren't affected (I 
think).  And handling QueryParser.autoGeneratePhraseQueries() properly (see 
LUCENE-7533) would be easy: if more than one token has the same entityID, then 
it should be a phrase when autoGeneratePhraseQueries=true.

I haven't written any code yet, so I'm not sure this idea is feasible.
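Just to illustrate the grouping rule, a rough Python sketch (tokens as plain tuples; the entityID attribute itself is hypothetical and doesn't exist in Lucene):

```python
def phrases_by_entity(tokens):
    # tokens: list of (term, pos_inc, pos_len, entity_id) in stream order.
    # Any entity ID shared by 2+ tokens denotes a sequence that should
    # become a phrase when autoGeneratePhraseQueries=true; tokens keep
    # their stream order within each group.
    groups = {}
    for term, _pos_inc, _pos_len, eid in tokens:
        groups.setdefault(eid, []).append(term)
    return {eid: terms for eid, terms in groups.items() if len(terms) > 1}

# The "wtf happened" example from the table above:
tokens = [
    ("wtf", 1, 3, 0), ("what", 0, 1, 1), ("wow", 0, 1, 2),
    ("the", 1, 1, 1), ("that's", 0, 1, 2), ("fudge", 1, 1, 1),
    ("funny", 0, 1, 2), ("happened", 1, 1, 3),
]
print(phrases_by_entity(tokens))
# {1: ['what', 'the', 'fudge'], 2: ['wow', "that's", 'funny']}
```

Entity IDs 0 ("wtf") and 3 ("happened") appear only once, so they stay single terms, while IDs 1 and 2 each yield a phrase.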

Thoughts?

> Replace SynonymFilter with SynonymGraphFilter
> -
>
> Key: LUCENE-6664
> URL: https://issues.apache.org/jira/browse/LUCENE-6664
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-6664.patch, LUCENE-6664.patch, LUCENE-6664.patch, 
> LUCENE-6664.patch, usa.png, usa_flat.png
>
>
> Spinoff from LUCENE-6582.
> I created a new SynonymGraphFilter (to replace the current buggy
> SynonymFilter), that produces correct graphs (does no "graph
> flattening" itself).  I think this makes it simpler.
> This means you must add the FlattenGraphFilter yourself, if you are
> applying synonyms during indexing.
> Index-time syn expansion is a necessarily "lossy" graph transformation
> when multi-token (input or output) synonyms are applied, because the
> index does not store {{posLength}}, so there will always be phrase
> queries that should match but do not, and then phrase queries that
> should not match but do.
> http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html
> goes into detail about this.
> However, with this new SynonymGraphFilter, if instead you do synonym
> expansion at query time (and don't do the flattening), and you use
> TermAutomatonQuery (future: somehow integrated into a query parser),
> or maybe just "enumerate all paths and make union of PhraseQuery", you
> should get 100% correct matches (not sure about "proper" scoring
> though...).
> This new syn filter still cannot consume an arbitrary graph.






Re: Merge indexes with CoreAdminHandler - possible issue

2016-11-01 Thread Erick Erickson
Alexey:

I don't know that code intimately, but we've had sporadic reports of
memory spikes during segment merging. I'd be _really_ interested in
the perspective from some of the Lucene guys, going on the assumption
that eventually the code path is the same.

Best,
Erick

On Tue, Nov 1, 2016 at 4:33 PM, Alexey Timofeev  wrote:
> Hello!
>
> I am stuck with CoreAdminHandler's merge indexes functionality. It looks
> like merge indexes behaves strangely when used with the "srcCore" parameter.
> (If we use the "indexDir" parameter then it works fine.) Please tell me if
> it's a real bug in Solr or if I am using it wrong. If the community agrees
> that it is indeed a bug, then let's create a ticket for it and I will be
> happy to suggest a patch to fix it.
>
> Now let me explain what I think is wrong with merging indexes using the
> "srcCore" parameter. The trouble is that when the merge code starts merging
> doc values fields, it mistakenly treats every field that can be uninverted
> as a doc values field. That results in uninverting all uninvert-able fields
> and writing the result into the merged index. Thus memory consumption is
> huge, the chance of an OOM is high, and the resulting index is bloated.
>
> Now, why does the merge code consider all uninvert-able fields to be doc
> values? It's because it treats every field where (FieldInfo.docValuesType !=
> DocValuesType.NONE) as doc values. FieldInfo objects are provided by the
> IndexReader, and since we always have an UninvertingReader in the chain of
> readers, we always see FieldInfo.docValuesType set to the doc values type to
> which that field can be converted. Thus, we almost always have
> (FieldInfo.docValuesType != DocValuesType.NONE) and uninvert almost all
> fields.
>
> Is there a way to create a core without an UninvertingReader in the chain of
> readers? If so, is that the intended usage or just a workaround?
>
> If loading a core without an UninvertingReader is not the intended approach,
> then I would suggest consulting the schema to find out which fields are doc
> values instead of relying on FieldInfo.docValuesType.
>
> Thank you in advance. Looking forward to your replies!
>
> --
> Regards.




[jira] [Created] (SOLR-9712) Saner default for maxWarmingSearchers

2016-11-01 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-9712:
---

 Summary: Saner default for maxWarmingSearchers
 Key: SOLR-9712
 URL: https://issues.apache.org/jira/browse/SOLR-9712
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Reporter: Shalin Shekhar Mangar
 Fix For: master (7.0), 6.4


As noted in SOLR-9710, the default for maxWarmingSearchers is Integer.MAX_VALUE 
which is just crazy. Let's have a saner default. Today we log a performance 
warning when the number of on deck searchers goes over 1. What if we had the 
default as 1 that expert users can increase if needed?






[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-11-01 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627837#comment-15627837
 ] 

Mikhail Khludnev commented on SOLR-9712:


bq. What if we had the default as 1 that expert users can increase if needed?
With a default of 1, the update log will leak; see SOLR-7115.

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6220 - Still Unstable!

2016-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6220/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure

Error Message:
The data directory was not cleaned up on unload after a failed core reload

Stack Trace:
java.lang.AssertionError: The data directory was not cleaned up on unload after 
a failed core reload
at 
__randomizedtesting.SeedInfo.seed([9930BE59CD312D62:E2F91C15EE13FF63]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure(CoreAdminHandlerTest.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 2 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.CoreAdminHandlerTest
   [junit4]   2> Creating dataDir: 
C:\User