[jira] [Commented] (SOLR-11883) NPE on missing nested query in QueryValueSource

2019-02-18 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771652#comment-16771652
 ] 

Mikhail Khludnev commented on SOLR-11883:
-

Absolutely.

> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-11883.patch, SOLR-11883.patch, SOLR-11883.patch, 
> SOLR-11883.patch
>
>
> When a nested query or query de-referencing is used but the query isn't 
> specified, Solr throws an NPE.
> For the following request:
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with the following stack trace:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handl

[jira] [Updated] (SOLR-9882) exceeding timeAllowed causes ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2019-02-18 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9882:
---
Summary: exceeding timeAllowed causes ClassCastException: 
BasicResultContext cannot be cast to SolrDocumentList  (was: 
ClassCastException: BasicResultContext cannot be cast to SolrDocumentList)

> exceeding timeAllowed causes ClassCastException: BasicResultContext cannot be 
> cast to SolrDocumentList
> --
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch
>
>
> After talking with [~yo...@apache.org] on the mailing list, I opened this 
> Jira ticket.
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
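The failing cast lives in SearchHandler, which assumes the "response" entry is always a SolrDocumentList; once timeAllowed is exceeded, a partial-results context can be returned instead. A minimal sketch of the defensive pattern, using hypothetical stand-in classes rather than Solr's actual types (the committed fix may differ):

```java
// Hedged sketch using stand-in classes (not Solr's actual types): when
// timeAllowed is exceeded, the "response" entry may be a result context
// instead of a document list, so guard the cast rather than assume it.
import java.util.ArrayList;

public class ResponseCastSketch {
    // Stand-ins for org.apache.solr.common.SolrDocumentList and
    // org.apache.solr.response.BasicResultContext.
    static class DocumentList extends ArrayList<String> {}
    static class ResultContext {}

    // Returns the document list when present, or null for a partial result,
    // instead of throwing ClassCastException.
    static DocumentList docListOrNull(Object response) {
        return (response instanceof DocumentList) ? (DocumentList) response : null;
    }

    public static void main(String[] args) {
        DocumentList docs = new DocumentList();
        docs.add("doc-1");
        System.out.println(docListOrNull(docs).size());          // 1
        System.out.println(docListOrNull(new ResultContext()));  // null
    }
}
```

This only illustrates why an unguarded cast breaks once partial results appear; it is not the attached patch.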



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-02-18 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771635#comment-16771635
 ] 

Lucene/Solr QA commented on SOLR-7414:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m  6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} Validate source patterns {color} | 
{color:red}  1m  6s{color} | {color:red} Validate source patterns 
validate-source-patterns failed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} extraction in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 50s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.LeaderTragicEventTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-7414 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959081/SOLR-7414.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 97875af |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
| Validate source patterns | 
https://builds.apache.org/job/PreCommit-SOLR-Build/306/artifact/out/patch-validate-source-patterns-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/306/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/306/testReport/ |
| modules | C: solr/contrib/extraction solr/core U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/306/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch
>
>
> Attempting to retrieve all fields while renaming one, e.g., "inStock" to 
> "stocked" (URL below), results in CSV output that has a column for "inStock" 
> (should be "stocked"), and the column has no values. 
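The symptom suggests the '*' expansion keeps the raw stored field name while the alias is registered only under its output name, so the header and values never line up. A hedged sketch of the alias-aware header expansion one would expect, with hypothetical helper names (not the actual CSVResponseWriter code):

```java
// Hedged sketch (hypothetical helper, simplified): when expanding '*' in fl,
// an alias mapping such as stocked -> inStock should override the raw field
// name so the CSV header and its values stay consistent.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FlAliasSketch {
    // aliases maps output name -> stored field name, e.g. stocked -> inStock.
    static List<String> expandHeaders(List<String> storedFields, Map<String, String> aliases) {
        // Invert the mapping: stored field -> output alias.
        Map<String, String> byField = new HashMap<>();
        aliases.forEach((out, field) -> byField.put(field, out));
        List<String> headers = new ArrayList<>();
        for (String f : storedFields) {
            headers.add(byField.getOrDefault(f, f)); // use the alias when one exists
        }
        return headers;
    }

    public static void main(String[] args) {
        List<String> stored = Arrays.asList("bar_i", "id", "_version_", "inStock");
        Map<String, String> aliases = Collections.singletonMap("stocked", "inStock");
        System.out.println(expandHeaders(stored, aliases));
        // [bar_i, id, _version_, stocked]
    }
}
```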
> Steps to reproduce using 5.1:
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
> '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
> 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
> {"responseHeader":{"status":0,"QTime":730}}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
> id,stocked
> aaa,true
> bbb,false
> ccc,true
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
> bar_i,id,_version_,

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-13-ea+8) - Build # 3562 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3562/
Java: 64bit/jdk-13-ea+8 -XX:+UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error Message:
SOLR-11606: ByteBuddy used by Mockito is not working with this JVM version.

Stack Trace:
org.junit.AssumptionViolatedException: SOLR-11606: ByteBuddy used by Mockito is 
not working with this JVM version.
at __randomizedtesting.SeedInfo.seed([4512D11D6DD83825]:0)
at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeNoException(RandomizedTest.java:742)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:362)
at org.apache.solr.cloud.OverseerTest.beforeClass(OverseerTest.java:284)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.IllegalArgumentException: Unknown Java version: 13
at 
net.bytebuddy.ClassFileVersion.ofJavaVersion(ClassFileVersion.java:210)
at 
net.bytebuddy.ClassFileVersion$VersionLocator$ForJava9CapableVm.locate(ClassFileVersion.java:462)
at net.bytebuddy.ClassFileVersion.ofThisVm(ClassFileVersion.java:223)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:360)
... 24 more
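The root cause is ByteBuddy's strict version table, which throws on any Java feature version it does not recognize (13 here). A hedged sketch of more lenient version handling; this is hypothetical logic for illustration, not ByteBuddy's actual implementation:

```java
// Hedged sketch: parse java.version leniently and cap at the latest known
// feature version instead of throwing on an unknown one (hypothetical logic,
// not ByteBuddy's ClassFileVersion implementation).
public class LenientVersionSketch {
    static final int LATEST_KNOWN = 12; // highest version this sketch "knows"

    static int classFileMajor(String javaVersion) {
        // "13-ea" -> 13, "1.8.0_191" -> 8
        String v = javaVersion.startsWith("1.") ? javaVersion.substring(2) : javaVersion;
        int end = 0;
        while (end < v.length() && Character.isDigit(v.charAt(end))) end++;
        int feature = Integer.parseInt(v.substring(0, end));
        if (feature > LATEST_KNOWN) feature = LATEST_KNOWN; // lenient fallback
        return 44 + feature; // class-file major version = feature + 44
    }

    public static void main(String[] args) {
        System.out.println(classFileMajor("1.8.0_191")); // 52
        System.out.println(classFileMajor("13-ea"));     // 56 (capped at 12)
    }
}
```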


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([4512D11D6DD83825]:0)
at org.apache.solr.cloud.OverseerTest.afterClass(OverseerTest.java:307)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:901)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.

[JENKINS] Lucene-Solr-NightlyTests-8.0 - Build # 2 - Still Unstable

2019-02-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.0/2/

5 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
Some docs had errors -- check logs expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<1>
at 
__randomizedtesting.SeedInfo.seed([9FB8D74B1170A97D:A9ACB50D9B2D936C]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:352)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:207)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Err

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 1041 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/1041/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at 
__randomizedtesting.SeedInfo.seed([214E1A8015356DA7:19FD3E7E32C6B976]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:622)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:278)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:206)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:198)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.setupCluster(AutoAddReplicasIntegrationTest.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsi

[GitHub] moshebla edited a comment on issue #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
moshebla edited a comment on issue #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#issuecomment-464996450
 
 
   Rebased on SOLR-13131.
   Updated the newly added tests to include maxCardinality in the createAlias 
command, since it is now a required param whose absence caused those tests to fail.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] moshebla commented on issue #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
moshebla commented on issue #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#issuecomment-464996450
 
 
   Rebased on SOLR-13131.
   Updated the added test to include maxCardinality in the createAlias command, 
since it is now a required param whose absence caused the newly added tests to fail.






[GitHub] moshebla commented on a change in pull request #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
moshebla commented on a change in pull request #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#discussion_r257896844
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/cloud/api/collections/CategoryRoutedAlias.java
 ##
 @@ -125,13 +146,13 @@ String buildCollectionNameFromValue(String value) {
 
   @Override
   public String createCollectionsIfRequired(AddUpdateCommand cmd) {
+assert !getCollectionList(this.parsedAliases).contains(buildCollectionNameFromValue(String.valueOf(cmd.getSolrInputDocument().getFieldValue(getRouteField()))));
 
 Review comment:
   Oh, sorry, I missed that subtle nuance.






[jira] [Commented] (SOLR-11883) NPE on missing nested query in QueryValueSource

2019-02-18 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771556#comment-16771556
 ] 

Munendra S N commented on SOLR-11883:
-

 [^SOLR-11883.patch] 
[~mkhludnev]
This patch handles parse failures for float, int, and double. I have also 
handled the parseId case.
For the null check, I thought of adding it to *parseArg*, but there are 
ValueSourceParsers that handle a returned null value when *parseArg* is called.
Could you please review these latest changes?
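For illustration, the fail-fast shape discussed above might look like this; the method and message are hypothetical stand-ins, not the attached patch:

```java
// Hedged sketch (hypothetical names, not the actual patch): fail fast with a
// clear message when a dereferenced query parameter such as $qq is missing,
// instead of letting a null Query reach hashCode() and cause an NPE.
public class NestedQueryGuardSketch {
    static String requireNestedQuery(String paramName, String paramValue) {
        if (paramValue == null || paramValue.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "Missing nested query: parameter '" + paramName + "' is undefined");
        }
        return paramValue;
    }

    public static void main(String[] args) {
        System.out.println(requireNestedQuery("qq", "field:value")); // field:value
        try {
            requireNestedQuery("qq", null); // simulates boost=query($qq) with no qq
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```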


>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.

[JENKINS] Lucene-Solr-Tests-8.x - Build # 39 - Unstable

2019-02-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/39/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderTragicEventTest.test

Error Message:
Failed while waiting for active collection Timeout waiting to see state for 
collection=collection1 :null Live Nodes: [127.0.0.1:33416_solr, 
127.0.0.1:36487_solr] Last available state: null

Stack Trace:
java.lang.RuntimeException: Failed while waiting for active collection
Timeout waiting to see state for collection=collection1 :null
Live Nodes: [127.0.0.1:33416_solr, 127.0.0.1:36487_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([6AF5F1D0BBC22019:E2A1CE0A153E4DE1]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForActiveCollection(MiniSolrCloudCluster.java:728)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForActiveCollection(MiniSolrCloudCluster.java:734)
at 
org.apache.solr.cloud.LeaderTragicEventTest.test(LeaderTragicEventTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[jira] [Updated] (SOLR-11883) NPE on missing nested query in QueryValueSource

2019-02-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-11883:

Attachment: SOLR-11883.patch

> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-11883.patch, SOLR-11883.patch, SOLR-11883.patch, 
> SOLR-11883.patch
>
>
> When the nested query or query de-referencing is used but the query isn't 
> specified Solr throws NPE.
> For following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at org.eclipse.jet

[GitHub] gus-asf commented on a change in pull request #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
gus-asf commented on a change in pull request #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#discussion_r257886945
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/cloud/api/collections/CategoryRoutedAlias.java
 ##
 @@ -128,7 +128,8 @@ public void validateRouteValue(AddUpdateCommand cmd) 
throws SolrException {
 
 if (cols.stream()
 .filter(x -> !x.contains(UNINITIALIZED)).count() >= 
Integer.valueOf(maxCardinality)) {
-  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "max 
cardinality can not be exceeded for a Category Routed Alias: " + 
maxCardinality);
+  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Max 
cardinality " + maxCardinality
 
 Review comment:
   Yeah this message is clearer and friendlier :).





[jira] [Updated] (SOLR-11883) NPE on missing nested query in QueryValueSource

2019-02-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-11883:

Attachment: SOLR-11883.patch

> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-11883.patch, SOLR-11883.patch, SOLR-11883.patch
>
>
> When the nested query or query de-referencing is used but the query isn't 
> specified Solr throws NPE.
> For following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at org.eclipse.jetty.server.HttpChannel

[GitHub] gus-asf commented on a change in pull request #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
gus-asf commented on a change in pull request #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#discussion_r257885303
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/cloud/api/collections/CategoryRoutedAlias.java
 ##
 @@ -117,15 +117,15 @@ public void validateRouteValue(AddUpdateCommand cmd) 
throws SolrException {
 
 String dataValue = 
String.valueOf(cmd.getSolrInputDocument().getFieldValue(getRouteField()));
 String candidateCollectionName = buildCollectionNameFromValue(dataValue);
-List<String> colls = getCollectionList(this.parsedAliases);
+List<String> cols = getCollectionList(this.parsedAliases);
 
-if (colls.contains(candidateCollectionName)) {
+if (cols.contains(candidateCollectionName)) {
   return;
 }
 
-if (colls.stream()
+if (cols.stream()
 .filter(x -> !x.contains(UNINITIALIZED)).count() >= 
Integer.valueOf(maxCardinality)) {
-  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "max 
cardinality can not be exceeded for a Category Routed Alias: " + 
maxCardinality);
 
 Review comment:
   Actually, Bad request is better. The server is not broken; it is correctly 
refusing data that it is configured to refuse. I'll tweak.
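For context, a self-contained sketch of the guard being discussed (simplified names, not the actual CategoryRoutedAlias code; the UNINITIALIZED marker value is an assumption for illustration): only initialized collections count toward the limit, and exceeding it is treated as a client error, matching the BAD_REQUEST reasoning above.

```java
import java.util.List;

// Simplified sketch of the max-cardinality guard; names and the marker
// string are assumptions, not the real Solr implementation.
class MaxCardinalityCheck {
    static final String UNINITIALIZED = "NEW_CATEGORY_ROUTED_ALIAS_WAITING";

    static void validate(List<String> cols, String candidate, int maxCardinality) {
        if (cols.contains(candidate)) {
            return; // already routed to this collection; no new category is created
        }
        // Count only collections that have actually been initialized.
        long initialized = cols.stream()
            .filter(c -> !c.contains(UNINITIALIZED))
            .count();
        if (initialized >= maxCardinality) {
            // Client error semantics: the server is correctly refusing
            // data it is configured to refuse.
            throw new IllegalArgumentException(
                "Max cardinality " + maxCardinality + " reached for Category Routed Alias");
        }
    }
}
```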





[jira] [Commented] (SOLR-11883) NPE on missing nested query in QueryValueSource

2019-02-18 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771547#comment-16771547
 ] 

Munendra S N commented on SOLR-11883:
-

 [^SOLR-11883.patch] 
Updated the patch to handle the case where the specified *v* is empty or 
contains only spaces (basically, the query resolves to null).

> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-11883.patch, SOLR-11883.patch, SOLR-11883.patch
>
>
> When the nested query or query de-referencing is used but the query isn't 
> specified Solr throws NPE.
> For following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.han

[GitHub] gus-asf commented on a change in pull request #573: WIP: SOLR-13150

2019-02-18 Thread GitBox
gus-asf commented on a change in pull request #573: WIP: SOLR-13150
URL: https://github.com/apache/lucene-solr/pull/573#discussion_r257879138
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/cloud/api/collections/CategoryRoutedAlias.java
 ##
 @@ -125,13 +146,13 @@ String buildCollectionNameFromValue(String value) {
 
   @Override
   public String createCollectionsIfRequired(AddUpdateCommand cmd) {
+assert 
!getCollectionList(this.parsedAliases).contains(buildCollectionNameFromValue(String.valueOf(cmd.getSolrInputDocument().getFieldValue(getRouteField()))));
 
 Review comment:
   This assert is not correct... the method is createCollections--if--required, 
meaning creation might not be required (because the collection might already 
exist). I'll remove this (and add some nice javadoc to make this clearer :) )





[jira] [Commented] (SOLR-13149) Implement a basic CategoryRoutedAlias class

2019-02-18 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771519#comment-16771519
 ] 

Gus Heck commented on SOLR-13149:
-

Realized the contains check wasn't really a per-doc thing anyway, so I left it 
alone. Wrote some more tests, including one to document the current state of 
support for non-English text, which by chance nicely uncovered a deadlock :). 
Fixed that. I think this ticket is done now.

> Implement a basic CategoryRoutedAlias class
> ---
>
> Key: SOLR-13149
> URL: https://issues.apache.org/jira/browse/SOLR-13149
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Gus Heck
>Priority: Major
>
> This ticket will add the core functionality for data driven routing to 
> collections in an alias based on the value of a particular field by fleshing 
> out the methods required by the RoutedAlias interface. This ticket will also 
> look for any synergies with the existing TimeRoutedAlias class and reuse code 
> if possible. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tigerquoll commented on issue #577: SOLR-13260: 128 bit integer type - longlong

2019-02-18 Thread GitBox
tigerquoll commented on issue #577: SOLR-13260: 128 bit integer type - longlong
URL: https://github.com/apache/lucene-solr/pull/577#issuecomment-464960956
 
 
   Hi Toke, thanks for your comments. It's great to have someone with a bit more 
experience with the Solr code base involved. In answer to the question of why: 
128-bit ints are required to support an IPv6 field type 
(https://issues.apache.org/jira/browse/SOLR-6741). I started off adding 
support for generic 128-bit field types as part of that JIRA, but ended up 
splitting it off into a separate commit, as the basic type was more than capable 
of standing on its own, and somebody else may find the functionality useful.
   
   So if we want IPv6, we need 128 bits. I guess it is a valid question 
whether we should expose a generic 128-bit type to anything else - I'm open to 
suggestions about what would be best here.
   
   In regards to supporting Solr functions - I agree - this is an area of 
significant concern for me as well. Most (all?) field functions only support 
up to 64-bit types. The tradeoff between implementation effort and end-user 
functionality for 128-bit point types is unlikely to be justifiable (and would 
likely involve performance tradeoffs).
   
   My concerns are primarily focused on what is needed (and possibly useful) 
in supporting an IPv6 data type. If I can get away with supporting a count 
aggregation and nothing more, I would call it a workable win. Only exposing 
that functionality in the IPv6 type may help to reduce end-user confusion.
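A hedged sketch of the underlying idea (not the patch's actual encoding): an IPv6 address is 16 bytes, i.e. an unsigned 128-bit integer, so it can be stored and range-compared as one. BigInteger keeps the example short; a real field type would more likely pack the value into two longs or a fixed sortable 16-byte encoding, for the performance reasons mentioned above.

```java
import java.math.BigInteger;
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only: treats an IPv6 literal as an unsigned 128-bit integer
// and does range comparison on it. Class and method names are made up.
class Ipv6As128Bit {

    static BigInteger toUnsigned128(String literal) {
        try {
            byte[] bytes = InetAddress.getByName(literal).getAddress();
            // signum = 1 interprets the 16 bytes as an unsigned magnitude.
            return new BigInteger(1, bytes);
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("not a parseable address: " + literal, e);
        }
    }

    static boolean inRange(String addr, String lo, String hi) {
        BigInteger v = toUnsigned128(addr);
        return v.compareTo(toUnsigned128(lo)) >= 0 && v.compareTo(toUnsigned128(hi)) <= 0;
    }
}
```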
   
   
   





[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+8) - Build # 186 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/186/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error Message:
SOLR-11606: ByteBuddy used by Mockito is not working with this JVM version.

Stack Trace:
org.junit.AssumptionViolatedException: SOLR-11606: ByteBuddy used by Mockito is 
not working with this JVM version.
at __randomizedtesting.SeedInfo.seed([645EA7DAA2CD6F43]:0)
at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeNoException(RandomizedTest.java:742)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:365)
at org.apache.solr.cloud.OverseerTest.beforeClass(OverseerTest.java:284)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.IllegalArgumentException: Unknown Java version: 13
at 
net.bytebuddy.ClassFileVersion.ofJavaVersion(ClassFileVersion.java:210)
at 
net.bytebuddy.ClassFileVersion$VersionLocator$ForJava9CapableVm.locate(ClassFileVersion.java:462)
at net.bytebuddy.ClassFileVersion.ofThisVm(ClassFileVersion.java:223)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:363)
... 24 more


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([645EA7DAA2CD6F43]:0)
at org.apache.solr.cloud.OverseerTest.afterClass(OverseerTest.java:307)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:901)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdap

[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 45 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/45/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:
--> http://127.0.0.1:63370/collection1_shard2_replica_n2:Failed to execute 
sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, field_l_p 
from collection1 where (text='()' OR text='') AND text='' order by 
field_i desc' against JDBC connection 'jdbc:calcitesolr:'. Error while 
executing SQL "select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
field_l_p from collection1 where (text='()' OR text='') AND text='' 
order by field_i desc": java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: --> 
http://127.0.0.1:63390/collection1_shard2_replica_n5/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
 must have DocValues to use this feature.

Stack Trace:
java.io.IOException: --> 
http://127.0.0.1:63370/collection1_shard2_replica_n2:Failed to execute sqlQuery 
'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, field_l_p from 
collection1 where (text='()' OR text='') AND text='' order by 
field_i desc' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id, field_i, str_s, field_i_p, field_f_p, 
field_d_p, field_l_p from collection1 where (text='()' OR text='') AND 
text='' order by field_i desc": java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: --> 
http://127.0.0.1:63390/collection1_shard2_replica_n5/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
 must have DocValues to use this feature.
at 
__randomizedtesting.SeedInfo.seed([F7074E4E7BEA33AC:5043F6EA16512015]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRul

[jira] [Commented] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771500#comment-16771500
 ] 

Noble Paul commented on SOLR-12992:
---

We should fix this for 7.7.1. The patch is uploaded in SOLR-13255.

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771489#comment-16771489
 ] 

Tomás Fernández Löbbe commented on SOLR-12992:
--

So, is the plan to fix the compatibility issue for 7.7.1? Or revert? What about 
8.0?

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>







[jira] [Resolved] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-13248.
--
   Resolution: Fixed
 Assignee: Shalin Shekhar Mangar
Fix Version/s: master (9.0)

[~romseygeek] -- this is ready to go.

> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. However, 
> in the absence of SOLR-12845, replicas can be placed without any regard for 
> maxShardsPerNode, causing multiple replicas of the same shard to be placed on 
> the same node. It was also reported in SOLR-13247 that createNodeSet is not 
> being respected.
> SOLR-13159 was an early signal of the problem, but it was not reproducible, 
> and there was a DNS problem in the cluster too, so the root cause was not 
> clear then.
> I am creating this blocker issue because, as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy, or add default policies with 
> SOLR-12845 and have createNodeSet work. Related, but not mandatory, would be 
> to fix SOLR-12847 as well.






[JENKINS-EA] Lucene-Solr-8.0-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 194 - Failure!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.0-Linux/194/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 2636 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190219_011750_9739926748381626948083.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  SuppressErrorAt=/split_if.cpp:322
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error 
(/home/buildbot/worker/jdk12u-linux/build/src/hotspot/share/opto/split_if.cpp:322),
 pid=20957, tid=21005
   [junit4] #  assert(prior_n->is_Region()) failed: must be a post-dominating 
merge point
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (12.0) (fastdebug build 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229, mixed 
mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x17da240]  PhaseIdealLoop::spinup(Node*, Node*, 
Node*, Node*, Node*, small_cache*) [clone .part.43]+0x330
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/J1/hs_err_pid20957.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/J1/replay_pid20957.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 21005
   [junit4] Dumping core ...
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190219_011750_97316746999678878317932.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp 
-- output truncated
   [junit4] <<< JVM J1: EOF 

[...truncated 705 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-12-ea+shipilev-fastdebug/bin/java 
-XX:+UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=B23EF45F9F107C2A 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-8.0-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/J1
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/test/temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 
/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/analysis/common/classes/test:/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/test-framework/classes/java:/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/codecs/classes/java:/home/jenkins/workspace/Lucene-Solr-8.0-Linux/lucene/build/core/classes/java:/h

[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771473#comment-16771473
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit 884a70d0b7bb009e775702db0b3fe7c509b9ac02 in lucene-solr's branch 
refs/heads/branch_8_0 from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=884a70d ]

SOLR-13248: Adding upgrade notes which explain the problem and the mitigation 
as well as steps to revert to the old behavior

(cherry picked from commit 97875af3f93477f48e4ead1979b2f36797106e06)


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. However, 
> in the absence of SOLR-12845, replicas can be placed without any regard for 
> maxShardsPerNode, causing multiple replicas of the same shard to be placed on 
> the same node. It was also reported in SOLR-13247 that createNodeSet is not 
> being respected.
> SOLR-13159 was an early signal of the problem, but it was not reproducible, 
> and there was a DNS problem in the cluster too, so the root cause was not 
> clear then.
> I am creating this blocker issue because, as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy, or add default policies with 
> SOLR-12845 and have createNodeSet work. Related, but not mandatory, would be 
> to fix SOLR-12847 as well.






[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771472#comment-16771472
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit 521701dd9cf8ff20a02bb25f9512f17247840eed in lucene-solr's branch 
refs/heads/branch_8_0 from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=521701d ]

SOLR-13248: Autoscaling based replica placement is broken out of the box.

Solr 7.5 enabled autoscaling based replica placement by default, but in the 
absence of default cluster policies, autoscaling can place more than one 
replica of the same shard on the same node. Also, maxShardsPerNode and 
createNodeSet were not respected. For these reasons, this issue reverts the 
default replica placement policy to the 'legacy' assignment policy that was the 
default until Solr 7.4.

Cherry-picked from commit 7ede4e2b


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. However, 
> in the absence of SOLR-12845, replicas can be placed without any regard for 
> maxShardsPerNode, causing multiple replicas of the same shard to be placed on 
> the same node. It was also reported in SOLR-13247 that createNodeSet is not 
> being respected.
> SOLR-13159 was an early signal of the problem, but it was not reproducible, 
> and there was a DNS problem in the cluster too, so the root cause was not 
> clear then.
> I am creating this blocker issue because, as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy, or add default policies with 
> SOLR-12845 and have createNodeSet work. Related, but not mandatory, would be 
> to fix SOLR-12847 as well.






[jira] [Resolved] (SOLR-13231) async CREATE collection request doesn't fail or cleanup when the request fails

2019-02-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-13231.
--
Resolution: Duplicate
  Assignee: Tomás Fernández Löbbe

Resolving as duplicate

> async CREATE collection request doesn't fail or cleanup when the request fails
> --
>
> Key: SOLR-13231
> URL: https://issues.apache.org/jira/browse/SOLR-13231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a CREATE collection command is issued with an async ID, the request is 
> marked as "completed" regardless of the outcome of the create. Also related, 
> the ClusterState is not cleaned up in the same way as in the sync request 
> case.






[jira] [Commented] (SOLR-12708) Async collection actions should not hide failures

2019-02-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771463#comment-16771463
 ] 

Tomás Fernández Löbbe commented on SOLR-12708:
--

I updated the PR to make the back compat part obvious. I'll run tests tonight 
and merge tomorrow if I see no issues.

> Async collection actions should not hide failures
> -
>
> Key: SOLR-12708
> URL: https://issues.apache.org/jira/browse/SOLR-12708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Backup/Restore
>Affects Versions: 7.4
>Reporter: Mano Kovacs
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The async collection API may hide failures compared to the sync version. 
> [OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744]
>  structures errors differently in the response, which hides failures from 
> most evaluators. RestoreCmd did not receive, nor handle, async addReplica 
> issues.
> Sample create collection sync and async results with an invalid solrconfig.xml:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":32104},
> "failure":{
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup."}
> }
> {noformat}
> vs async:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":3},
> "success":{
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":3}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":11}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}}},
> "myTaskId2709146382836":{
> "responseHeader":{
> "status":0,
> "QTime":1},
> "STATUS":"failed",
> "Response":"Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to 
> create core [name_shard2_replica_n2] Caused by: The content of elements must 
> consist of well-formed character data or markup."},
> "status":{
> "state":"completed",
> "msg":"found [myTaskId] in completed tasks"}}
> {noformat}
> Proposing to add a failure node to the results, keeping the result backward 
> compatible but correct.






[jira] [Assigned] (SOLR-12708) Async collection actions should not hide failures

2019-02-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-12708:


Assignee: Tomás Fernández Löbbe  (was: Varun Thacker)

> Async collection actions should not hide failures
> -
>
> Key: SOLR-12708
> URL: https://issues.apache.org/jira/browse/SOLR-12708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Backup/Restore
>Affects Versions: 7.4
>Reporter: Mano Kovacs
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The async collection API may hide failures compared to the sync version. 
> [OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744]
>  structures errors differently in the response, which hides failures from 
> most evaluators. RestoreCmd did not receive, nor handle, async addReplica 
> issues.
> Sample create collection sync and async results with an invalid solrconfig.xml:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":32104},
> "failure":{
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup."}
> }
> {noformat}
> vs async:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":3},
> "success":{
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":3}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":11}},
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}}},
> "myTaskId2709146382836":{
> "responseHeader":{
> "status":0,
> "QTime":1},
> "STATUS":"failed",
> "Response":"Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to 
> create core [name_shard2_replica_n2] Caused by: The content of elements must 
> consist of well-formed character data or markup."},
> "status":{
> "state":"completed",
> "msg":"found [myTaskId] in completed tasks"}}
> {noformat}
> Proposing to add a failure node to the async status results, keeping the 
> response backward compatible while reporting the correct result.
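
A purely illustrative sketch of what such a response might look like (the field names here are hypothetical, not a committed format): the async status would carry the failure explicitly alongside the successes, mirroring the `failure` node that the synchronous response already reports.

```json
{
  "responseHeader": {"status": 0, "QTime": 3},
  "success": {
    "localhost:8983_solr": {"responseHeader": {"status": 0, "QTime": 12}}
  },
  "failure": {
    "localhost:8983_solr": "Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to create core [name_shard2_replica_n2]"
  },
  "status": {"state": "completed", "msg": "found [myTaskId] in completed tasks"}
}
```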



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8696) TestGeo3DPoint.testGeo3DRelations failure

2019-02-18 Thread Karl Wright (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned LUCENE-8696:
---

Assignee: Karl Wright

> TestGeo3DPoint.testGeo3DRelations failure
> -
>
> Key: LUCENE-8696
> URL: https://issues.apache.org/jira/browse/LUCENE-8696
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
>
> Reproduce with:
> {code:java}
> ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=721195D0198A8470 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=sr-RS -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1{code}
> Error:
> {code:java}
>    [junit4] FAILURE 1.16s | TestGeo3DPoint.testGeo3DRelations <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError: invalid hits for 
> shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.3439035240356338(77.01), 
> points={[[lat=2.4457272005608357E-47, 
> lon=0.017453291479645996([X=1.0009663787601641, Y=0.017471932090601616, 
> Z=2.448463612203698E-47])], [lat=2.4457272005608357E-47, 
> lon=0.8952476719156919([X=0.6260252093310985, Y=0.7812370940381473, 
> Z=2.448463612203698E-47])], [lat=2.4457272005608357E-47, 
> lon=0.6491968536639036([X=0.7974608400583222, Y=0.6052232384770843, 
> Z=2.448463612203698E-47])], [lat=-0.7718789008737459, 
> lon=0.9236607495528212([X=0.43181767034308555, Y=0.5714183775701452, 
> Z=-0.6971214014446648])]]}}{code}






[jira] [Commented] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771458#comment-16771458
 ] 

Noble Paul commented on SOLR-13255:
---

[~ahubold] do you have a test for this?

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch, SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)
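
Illustratively, the CharSequence-vs-String mismatch can be handled by coercing any CharSequence to a String before detection. This is a minimal sketch under assumptions, not Solr's actual patch; the `FieldCoercion` class and `asString` helper are hypothetical names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldCoercion {

    /** Returns the value as a String if it is any CharSequence, else null. */
    static String asString(Object value) {
        // Since Solr 7.7 string fields may arrive as ByteArrayUtf8CharSequence,
        // which is a CharSequence but not a String; toString() handles both.
        return (value instanceof CharSequence) ? value.toString() : null;
    }

    public static void main(String[] args) {
        // Stand-in for a SolrInputDocument: field name -> value.
        Map<String, Object> doc = new LinkedHashMap<>();
        // StringBuilder stands in for a non-String CharSequence value.
        doc.put("name_tokenized", new StringBuilder("hello world"));
        doc.put("count", 42);

        System.out.println(asString(doc.get("name_tokenized"))); // hello world
        System.out.println(asString(doc.get("count")));          // null
    }
}
```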






[jira] [Comment Edited] (SOLR-12055) Enable async logging by default

2019-02-18 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771451#comment-16771451
 ] 

Erick Erickson edited comment on SOLR-12055 at 2/19/19 12:46 AM:
-

I made some changes to SolrTestCaseJ4 and StartupLoggingUtils; if anyone with 
deep knowledge can peek at them, that would be good. Mainly:

1> providing a logger shutdown method in StartupLoggingUtils.

2> SolrTestCaseJ4.tearDownTestCases() now calls StartupLoggingUtils.shutdown().

There are two classes of test problems:

1> leaked objects that mention lmax.disruptor or similar. This should be fixed 
by just having the test case extend SolrTestCaseJ4 rather than LuceneTestCase so 
the above shutdown method gets called.

2> Assertion errors that look at logged messages. See the changes to 
RequestLoggingTest for a model, but basically just loop a few times until the 
async logging gets flushed. I expect there'll be more test failures that pop 
out since any test that checks logged messages is subject to timing issues and 
I'm not at all sure 10 full runs is enough to flush them out. I'll investigate 
them when I see them. If anyone notices something that I don't see first, 
please ping me. If you're ambitious, try fixing it by looping a few times like 
in RequestLoggingTest. So far it only seems to take a few repeats at most to 
find the missed log message. I suspect under heavy load it may be more.

Regular Solr startup/shutdown doesn't seem affected at all.

[~markrmil...@gmail.com]: You made a comment a lng time ago about calling 
this out loudly. Are the changes in the upgrade notes sufficient? I left the 
synchronous logging configs in the log4j2 configs, commented out, for people 
who want synchronous logging back.

[~ctargett] [~gerlowskija]: I know the 8.x ref guide hasn't been released; the 
doc changes are in solr-upgrade-notes.adoc if we need to take them out. This 
change is for 8.1.

The leaked objects appear to have no relation to anything except logging, so 
that hope is forlorn.

At this point:

1> all tests run, with the usual caveats

2> I've run the full test suite 10 times and don't see any mysterious failures

3> I've beasted TestLogWatcher SOLR-12732 1,000 times and no failures, so I'll 
close that JIRA too.

I'll probably push this late this week, complain now or hold your peace ;)
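
The retry pattern described for RequestLoggingTest can be sketched like this (`LogPoll` and `waitFor` are illustrative names, not the actual test utility): poll a few times, sleeping between attempts, until the expected log line has been flushed by the async appender.

```java
import java.util.function.BooleanSupplier;

// Illustrative polling helper for tests that assert on async-logged messages.
public class LogPoll {

    /** Retries the condition up to `attempts` times, sleeping between tries. */
    public static boolean waitFor(BooleanSupplier condition, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (condition.getAsBoolean()) {
                return true; // the expected log line showed up
            }
            Thread.sleep(sleepMs); // give the async appender time to flush
        }
        return condition.getAsBoolean(); // one last check after the final sleep
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a log buffer that only "flushes" on the third poll.
        int[] polls = {0};
        boolean found = waitFor(() -> ++polls[0] >= 3, 10, 5);
        System.out.println(found); // true
    }
}
```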



> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee:

[jira] [Updated] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13255:
--
Attachment: SOLR-13255.patch

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch, SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)






[jira] [Updated] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13255:
--
Attachment: (was: SOLR-13255.patch)

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)






[jira] [Updated] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13249:
--
Attachment: (was: SOLR-13255.patch)

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String, but 
> in a distributed test I get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence, the getStringOrNull() method always 
> returns null unless some internal method has called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This has been the case since 7.7.0.
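
The caching behavior described (getStringOrNull() returning null until some internal call materializes the String) can be sketched as follows. `Utf8Seq` and its method names are hypothetical stand-ins, not Solr's ByteArrayUtf8CharSequence API:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for a lazily decoded UTF-8 char sequence.
public class Utf8Seq {
    private final byte[] buf;
    private String cached; // populated only once the String is materialized

    public Utf8Seq(byte[] buf) { this.buf = buf; }

    /** Decodes and caches the String (analogous to an internal _getStr call). */
    public String materialize() {
        if (cached == null) {
            cached = new String(buf, StandardCharsets.UTF_8);
        }
        return cached;
    }

    /** Returns the String only if already materialized; otherwise null. */
    public String getStringOrNull() { return cached; }

    public static void main(String[] args) {
        Utf8Seq seq = new Utf8Seq("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(seq.getStringOrNull()); // null until materialize()
        seq.materialize();
        System.out.println(seq.getStringOrNull()); // hello
    }
}
```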






[jira] [Updated] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13249:
--
Attachment: SOLR-13255.patch

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
> Attachments: SOLR-13255.patch
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String, but 
> in a distributed test I get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence, the getStringOrNull() method always 
> returns null unless some internal method has called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This has been the case since 7.7.0.






[jira] [Commented] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771425#comment-16771425
 ] 

Noble Paul commented on SOLR-12992:
---

All the normal calls are always expected to return normal Java Strings, so it 
was supposed to be backward compatible. Every public API should behave exactly 
the same as it used to.

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>







[jira] [Assigned] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-13249:
-

Assignee: Noble Paul

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 8.0
>
> Attachments: SOLR-13255.patch
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String, but 
> in a distributed test I get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence, the getStringOrNull() method always 
> returns null unless some internal method has called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This has been the case since 7.7.0.






[jira] [Commented] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771422#comment-16771422
 ] 

Alan Woodward commented on SOLR-12992:
--

[~markrmil...@gmail.com] 7.7 was released a week ago, but I think there are 
plans for a 7.7.1. Is this a blocker for 8.0 as well? I was planning on 
building RC2 tomorrow.

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>







[jira] [Assigned] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-13255:
-

Assignee: Noble Paul

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)






[jira] [Commented] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771419#comment-16771419
 ] 

Noble Paul commented on SOLR-13249:
---

bq. gets a field value via SolrInputField.getValue(). In a normal unit test 
this yields me a String.

The fix should be to always return a String value from 
SolrInputField.getValue().

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String, but 
> in a distributed test I get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence, the getStringOrNull() method always 
> returns null unless some internal method has called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This has been the case since 7.7.0.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 5057 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5057/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Timeout occurred while waiting response from server at: http://127.0.0.1:62044

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: http://127.0.0.1:62044
at 
__randomizedtesting.SeedInfo.seed([1EA7179169363519:96F3284BC7CA58E1]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:338)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1080)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
or

[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771413#comment-16771413
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit 7ede4e2b430c7e0c36dedf21fcab35fe48ed783d in lucene-solr's branch 
refs/heads/branch_8x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7ede4e2 ]

SOLR-13248: Autoscaling based replica placement is broken out of the box.

Solr 7.5 enabled autoscaling based replica placement by default but in the 
absence of default cluster policies, autoscaling can place more than 1 replica 
of the same shard on the same node. Also, the maxShardsPerNode and 
createNodeSet was not respected. Due to these reasons, this issue reverts the 
default replica placement policy to the 'legacy' assignment policy that was the 
default until Solr 7.4.

(cherry picked from commit 7e2d40197cb096fe0519652c2ebbbf38a70d0d65)


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. 
> However, in the absence of SOLR-12845, replicas can be placed without any 
> regard for maxShardsPerNode, causing multiple replicas of the same shard to 
> be placed on the same node. It was also reported in SOLR-13247 that 
> createNodeSet is not being respected either.
> SOLR-13159 was an early signal of the problem but it was not reproducible and 
> there was a DNS problem in the cluster too so the root cause was not clear 
> then.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.






[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771414#comment-16771414
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit d697b4a6ad3ac977a2890f57225293c17a44779f in lucene-solr's branch 
refs/heads/branch_8x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d697b4a ]

SOLR-13248: Adding upgrade notes which explain the problem and the mitigation 
as well as steps to revert to the old behavior

(cherry picked from commit 97875af3f93477f48e4ead1979b2f36797106e06)


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. 
> However, in the absence of SOLR-12845, replicas can be placed without any 
> regard for maxShardsPerNode, causing multiple replicas of the same shard to 
> be placed on the same node. It was also reported in SOLR-13247 that 
> createNodeSet is not being respected either.
> SOLR-13159 was an early signal of the problem but it was not reproducible and 
> there was a DNS problem in the cluster too so the root cause was not clear 
> then.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.






[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771401#comment-16771401
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit 97875af3f93477f48e4ead1979b2f36797106e06 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=97875af ]

SOLR-13248: Adding upgrade notes which explain the problem and the mitigation 
as well as steps to revert to the old behavior


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling the default replica placement strategy. 
> However, in the absence of SOLR-12845, replicas can be placed without any 
> regard for maxShardsPerNode, causing multiple replicas of the same shard to 
> be placed together on the same node. It was also reported in SOLR-13247 that 
> createNodeSet is not being respected.
> SOLR-13159 was an early signal of the problem, but it was not reproducible, 
> and there was a DNS problem in the cluster too, so the root cause was not 
> clear at the time.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.






[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771395#comment-16771395
 ] 

ASF subversion and git services commented on SOLR-13248:


Commit 7e2d40197cb096fe0519652c2ebbbf38a70d0d65 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7e2d401 ]

SOLR-13248: Autoscaling based replica placement is broken out of the box.

Solr 7.5 enabled autoscaling-based replica placement by default, but in the 
absence of default cluster policies, autoscaling can place more than one 
replica of the same shard on the same node. Also, maxShardsPerNode and 
createNodeSet were not respected. For these reasons, this issue reverts the 
default replica placement policy to the 'legacy' assignment policy that was 
the default until Solr 7.4.
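The commit message above describes reverting to the legacy policy while keeping autoscaling-based placement available as an opt-in. As a minimal sketch, this is the shape of payload one might POST to the V2 cluster API to opt back in; the `useLegacyReplicaAssignment` property name is an assumption drawn from the SOLR-13248 discussion and should be verified against the official upgrade notes before use.

```python
import json

# Hypothetical sketch: the cluster-wide default that (per this change's
# discussion) selects the placement strategy. Setting it to False would
# opt back into autoscaling-based placement; True keeps the legacy
# (pre-7.5) assignment policy. Verify the property name before relying on it.
payload = {
    "set-obj-property": {
        "defaults": {
            "cluster": {
                "useLegacyReplicaAssignment": False
            }
        }
    }
}

# This body would be POSTed to the V2 cluster API, e.g.:
#   curl -X POST http://localhost:8983/api/cluster \
#        -H 'Content-Type: application/json' -d "$(python this_script.py)"
print(json.dumps(payload))
```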


> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling as the default replica placement strategy. 
> However in the absence of SOLR-12845, replicas can be placed without any 
> regards for maxShardsPerNode causing multiple replicas of the same shard to 
> be placed on the same node together. Also it was reported in SOLR-13247 that 
> createNodeSet is not being respected as well.
> SOLR-13159 was an early signal of the problem but it was not reproducible and 
> there was a DNS problem in the cluster too so the root cause was not clear 
> then.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.






[jira] [Commented] (LUCENE-8697) GraphTokenStreamFiniteStrings does not correctly handle gaps in the token graph

2019-02-18 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771386#comment-16771386
 ] 

Jim Ferenczi commented on LUCENE-8697:
--

The patch looks good. Note that while the patch fixes a real issue, it doesn't 
solve the bug reported in https://issues.apache.org/jira/browse/LUCENE-8250. 
Since the issues are different, I am +1 to push this patch as-is and to work on 
LUCENE-8250 in a follow-up.
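To make the gap problem concrete, here is an illustrative sketch (not Lucene's actual GraphTokenStreamFiniteStrings code) of enumerating the finite strings of a small token graph in which one path passes through a position hole left by, e.g., a removed stopword. If the enumeration did not explicitly skip over gap edges, the path through the hole would be dropped entirely.

```python
# Token graph as an adjacency list: start position -> [(term, end position)].
# None models a gap (a position hole) rather than a real token.
graph = {
    0: [("big", 1), ("mighty", 2)],  # side path "mighty" spans positions 0-2
    1: [(None, 2)],                  # gap: stopword removed at position 1
    2: [("apple", 3)],
}

def finite_strings(graph, start, end):
    """Enumerate every term sequence from start to end, stepping over gaps."""
    if start == end:
        return [[]]
    paths = []
    for term, nxt in graph.get(start, []):
        for tail in finite_strings(graph, nxt, end):
            # A gap edge contributes no term but still advances the position.
            paths.append(([term] if term is not None else []) + tail)
    return paths

print(finite_strings(graph, 0, 3))
# -> [['big', 'apple'], ['mighty', 'apple']]
```

The path `['big', 'apple']` survives only because the gap edge at position 1 is traversed; an iterator that treats a hole as a dead end loses that side path, which is the class of bug described above.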

> GraphTokenStreamFiniteStrings does not correctly handle gaps in the token 
> graph
> ---
>
> Key: LUCENE-8697
> URL: https://issues.apache.org/jira/browse/LUCENE-8697
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8697.patch
>
>
> Currently, side-paths with gaps in can end up being missed entirely when 
> iterating through token streams.






[jira] [Commented] (SOLR-13256) Ref Guide: Upgrade Notes for 7.7

2019-02-18 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771382#comment-16771382
 ] 

Jason Gerlowski commented on SOLR-13256:


bq. Maybe it makes sense to hold the 7.7 Ref Guide until we figure out what is 
going to happen with those issues re: 7.7.1?

I think that makes sense. If there is going to be a 7.7.1 soon that we're going 
to be steering everyone towards anyways, there's no need to include this in the 
ref-guide.  If no one volunteers to do a 7.7.1 release soon and people are 
going to be using 7.7.0, then we can cross that bridge when we come to it.



(Thoughts below are only relevant if there is no 7.7.1 soon, and we need to 
cross the bridge of deciding whether to include Known Issues in our Upgrade 
Notes)

bq.  to date, we haven't mentioned Known Issues in the Upgrade Notes ... [this 
is] actually really hard for Solr ... What's the criteria for being included 
here? What about all the prior releases?

I'm not sure the slope is as slippery as it looks.  Yes, there are 1500 
unresolved Solr bugs, but only 8 specifically tagged as affecting 7.7.  And 
only 2 of those are being talked about as serious enough to trigger a bugfix 
release.  The number of "candidates-for-inclusion" drops to just a few pretty 
quickly.

If that's not convincing and your question about having guidelines/criteria 
wasn't rhetorical, let me offer a strawman for discussion: "Known Issues should 
only be included in the Upgrade Notes if they are generating discussion about 
an immediate bugfix release at the time the ref-guide release is being worked 
on".

> Ref Guide: Upgrade Notes for 7.7
> 
>
> Key: SOLR-13256
> URL: https://issues.apache.org/jira/browse/SOLR-13256
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13256.patch
>
>
> With 7.7 released and out the door, we should get the ball moving on a 7.7 
> ref-guide.  One of the prerequisites for that process is putting together 
> some upgrade notes that can go in 
> {{solr/solr-ref-guide/src/solr-upgrade-notes.adoc}} for users upgrading to 
> 7.7.
> I'm going to take a look at CHANGES and take a first pass at the "upgrading" 
> section for 7.7.  If anyone has anything they know should be in the list, 
> please let me know and I'll try to include it.






[jira] [Commented] (SOLR-9882) ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2019-02-18 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771357#comment-16771357
 ] 

Mikhail Khludnev commented on SOLR-9882:


I've taken a more invasive testing approach; I'll add one more stochastic as 
well as an elegant test. So far it covers:
 - the time-limiting collector, though it would be better to test with a 
   delaying post filter
 - directory timeout at search
 - FSV (really tricky)
 - old and new facets
It should resolve many of those '500 on timeout' issues. Opinions?

> ClassCastException: BasicResultContext cannot be cast to SolrDocumentList
> -
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch
>
>
> After talk with [~yo...@apache.org] in the mailing list I open this Jira 
> ticket
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)






[jira] [Assigned] (SOLR-9882) ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2019-02-18 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-9882:
--

Assignee: Mikhail Khludnev

> ClassCastException: BasicResultContext cannot be cast to SolrDocumentList
> -
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch
>
>
> After talk with [~yo...@apache.org] in the mailing list I open this Jira 
> ticket
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)






[jira] [Updated] (SOLR-9882) ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2019-02-18 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9882:
---
Attachment: SOLR-9882.patch

> ClassCastException: BasicResultContext cannot be cast to SolrDocumentList
> -
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch
>
>
> After talk with [~yo...@apache.org] in the mailing list I open this Jira 
> ticket
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)






[jira] [Commented] (SOLR-13261) Should SortableTextField be allowed in export?

2019-02-18 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771344#comment-16771344
 ] 

Erick Erickson commented on SOLR-13261:
---

This is at least in the general ballpark of SOLR-8362

> Should SortableTextField be allowed in export?
> --
>
> Key: SOLR-13261
> URL: https://issues.apache.org/jira/browse/SOLR-13261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Priority: Major
>
> ExportWriter (and perhaps other places) explicitly tests for certain field 
> types and errors out with "Export fields must either be one of the following 
> types: int,float,long,double,string,date,boolean"
> It seems perfectly legal to export SortableTextField too, since it's a DV 
> field. How desirable that would be is an open question.






[jira] [Created] (SOLR-13261) Should SortableTextField be allowed in export?

2019-02-18 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13261:
-

 Summary: Should SortableTextField be allowed in export?
 Key: SOLR-13261
 URL: https://issues.apache.org/jira/browse/SOLR-13261
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.7, 8.0, master (9.0)
Reporter: Erick Erickson


ExportWriter (and perhaps other places) explicitly tests for certain field 
types and errors out with "Export fields must either be one of the following 
types: int,float,long,double,string,date,boolean"

It seems perfectly legal to export SortableTextField too, since it's a DV 
field. How desirable that would be is an open question.
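A hypothetical sketch of the kind of type-whitelist check described above, next to a docValues-based alternative that would admit SortableTextField. The names and structure are illustrative only, not Solr's actual ExportWriter code.

```python
# Illustrative sketch, not Solr internals: a fixed type whitelist rejects
# SortableTextField even though its docValues are exportable.
EXPORTABLE_TYPES = {"int", "float", "long", "double", "string", "date", "boolean"}

def check_export_field(field_type, has_docvalues):
    """Strict check: field type must appear in the fixed whitelist."""
    if field_type not in EXPORTABLE_TYPES:
        raise ValueError(
            "Export fields must either be one of the following types: "
            + ",".join(sorted(EXPORTABLE_TYPES)))
    if not has_docvalues:
        raise ValueError("Export fields must have docValues enabled")

def check_export_field_relaxed(field_type, has_docvalues):
    """Relaxed check: require docValues only, so SortableTextField passes."""
    if not has_docvalues:
        raise ValueError("Export fields must have docValues enabled")
```

Under the strict check a docValues-backed `text_sortable` field is rejected purely on its type name; the relaxed check keys off the capability (docValues) instead, which is the design question the issue raises.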






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1774 - Still Unstable

2019-02-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1774/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderTragicEventTest.test

Error Message:
Timeout waiting for new replica become leader Timeout waiting to see state for 
collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/6)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node2":{   "core":"collection1_shard1_replica_n1",   
"base_url":"http://127.0.0.1:45432/solr";,   
"node_name":"127.0.0.1:45432_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n3",   
"base_url":"http://127.0.0.1:34347/solr";,   
"node_name":"127.0.0.1:34347_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"} Live Nodes: [127.0.0.1:34347_solr, 127.0.0.1:45432_solr] 
Last available state: 
DocCollection(collection1//collections/collection1/state.json/6)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node2":{   "core":"collection1_shard1_replica_n1",   
"base_url":"http://127.0.0.1:45432/solr";,   
"node_name":"127.0.0.1:45432_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n3",   
"base_url":"http://127.0.0.1:34347/solr";,   
"node_name":"127.0.0.1:34347_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new replica become leader
Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/6)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node2":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"http://127.0.0.1:45432/solr";,
  "node_name":"127.0.0.1:45432_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n3",
  "base_url":"http://127.0.0.1:34347/solr";,
  "node_name":"127.0.0.1:34347_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:34347_solr, 127.0.0.1:45432_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/6)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node2":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"http://127.0.0.1:45432/solr";,
  "node_name":"127.0.0.1:45432_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n3",
  "base_url":"http://127.0.0.1:34347/solr";,
  "node_name":"127.0.0.1:34347_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([5E500FEE72A40B84:D6043034DC58667C]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
at 
org.apache.solr.cloud.LeaderTragicEventTest.test(LeaderTragicEventTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.Ran

BadApple report, things are changing

2019-02-18 Thread Erick Erickson
things are settled down quite a bit. So ongoing I’ll publish this each week, 
but will only periodically change the annotations.

If/when we stop running 7x Jenkins jobs, I may start annotating with BadApple 
again, we’ll see.

Meanwhile I’ll post the list of new test failures over the last 4 weeks and 
attach the full report, but won’t change the source for a while.

Failures in the last 4 reports..
   Report   Pct runsfails   test
 0123   6.9  137 14  HdfsUnloadDistributedZkTest.test
 0123   3.0 1334 32  LeaderTragicEventTest.test
 0123   0.4 1306 11  MathExpressionTest.testGammaDistribution
 0123   1.5 1321 10  
MissingSegmentRecoveryTest.testLeaderRecovery
 0123   0.8 1315  6  OverseerRolesTest.testOverseerRole
 0123   0.4 1330 12  TestSimExtremeIndexing.testScaleUp
 Will BadApple all tests above this line except ones listed at the 
top**





[jira] [Commented] (SOLR-13256) Ref Guide: Upgrade Notes for 7.7

2019-02-18 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771316#comment-16771316
 ] 

Cassandra Targett commented on SOLR-13256:
--

I took a look at the patch. To date, we haven't mentioned Known Issues in the 
Upgrade Notes. I saw you had the one for the maxShardsPerNode problem in an 
earlier patch, but in this patch I see you've added another item for whatever 
is wrong with whichever URP.

It would be nice if we could do something for a Known Issues list, but it's 
actually really hard for Solr. There are over 1500 open "Bugs" in Jira, many of 
them open for a long, long time. What's the criteria for being included here? 
What about all the prior releases?

Maybe it makes sense to hold the 7.7 Ref Guide until we figure out what is 
going to happen with those issues re: 7.7.1? It seems they are both big enough 
to warrant a 7.7.1 release on their own; if they are fixed, we won't have to 
worry about explaining them, and we can say as a community that we recommend 
moving to 7.7.1 since it resolves both of those issues.

> Ref Guide: Upgrade Notes for 7.7
> 
>
> Key: SOLR-13256
> URL: https://issues.apache.org/jira/browse/SOLR-13256
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13256.patch
>
>
> With 7.7 released and out the door, we should get the ball moving on a 7.7 
> ref-guide.  One of the prerequisites for that process is putting together 
> some upgrade notes that can go in 
> {{solr/solr-ref-guide/src/solr-upgrade-notes.adoc}} for users upgrading to 
> 7.7.
> I'm going to take a look at CHANGES and take a first pass at the "upgrading" 
> section for 7.7.  If anyone has anything they know should be in the list, 
> please let me know and I'll try to include it.






[JENKINS] Lucene-Solr-repro - Build # 2858 - Unstable

2019-02-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2858/

[...truncated 57 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-8.0/2/consoleText

[repro] Revision: fd1fc2637071347201df3ecede659c47a0414d9d

[repro] Repro line:  ant test  -Dtestcase=TestSimExtremeIndexing 
-Dtests.method=testScaleUp -Dtests.seed=58545D0D491426E4 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=Asia/Ust-Nera 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
6a0f7b251de104d9ce1dfa6b18821715929fe76b
[repro] git fetch
[repro] git checkout fd1fc2637071347201df3ecede659c47a0414d9d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimExtremeIndexing
[repro] ant compile-test

[...truncated 3572 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestSimExtremeIndexing" -Dtests.showOutput=onerror  
-Dtests.seed=58545D0D491426E4 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ar-SD -Dtests.timezone=Asia/Ust-Nera -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 23585 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing
[repro] git checkout 6a0f7b251de104d9ce1dfa6b18821715929fe76b

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771281#comment-16771281
 ] 

Mark Miller commented on SOLR-12992:


We can't release Solr 7.7 with this breaking change. SOLR-13249 and SOLR-13255 
describe a large back-compat break for update processors that went unmentioned.

The difference between single-node and cloud behavior, the getString bug, and 
the breaking change for update processors all need to be addressed.

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>







[jira] [Reopened] (SOLR-12885) BinaryResponseWriter (javabin format) should directly copy from Bytesref to output

2019-02-18 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-12885:


> BinaryResponseWriter (javabin format) should directly copy from Bytesref to 
> output
> --
>
> Key: SOLR-12885
> URL: https://issues.apache.org/jira/browse/SOLR-12885
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12885.patch, SOLR-12885.patch, SOLR-12885.patch, 
> SOLR-12885.patch, SOLR-12885.patch
>
>
> The format in which bytes are stored in {{BytesRef}} and the javabin 
> string format are the same. We don't need to convert the string/text 
> fields from {{BytesRef}} to String and back to UTF-8.
> {{Now a String/Text field is read and written out as follows}}
> {{luceneindex(UTF8 bytes) --> UTF16 (char[]) --> new String() a copy of UTF16 
> char[] -->  UTF8bytes(javabin format)}}
> This does not add a new type to javabin. It's encoded as String in the 
> serialized data. When it is deserialized, you get a String back
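
The redundant conversion chain described above can be sketched in isolation (a
standalone illustration, not Solr's actual BinaryResponseWriter code; the method
names `writeViaString` and `writeDirect` are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class JavabinCopySketch {
    // Current path: stored UTF-8 bytes -> UTF-16 char[] via new String() -> UTF-8 bytes again.
    static byte[] writeViaString(byte[] utf8FromIndex) {
        String s = new String(utf8FromIndex, StandardCharsets.UTF_8); // decode + copy
        return s.getBytes(StandardCharsets.UTF_8);                    // re-encode + copy
    }

    // Proposed path: the javabin string format is already UTF-8,
    // so the stored bytes can be copied straight to the output.
    static byte[] writeDirect(byte[] utf8FromIndex) {
        return Arrays.copyOf(utf8FromIndex, utf8FromIndex.length);
    }

    public static void main(String[] args) {
        byte[] stored = "héllo".getBytes(StandardCharsets.UTF_8);
        // Both paths produce identical bytes; the direct path skips two conversions.
        System.out.println(Arrays.equals(writeViaString(stored), writeDirect(stored)));
    }
}
```

Since a valid UTF-8 decode/re-encode round trip reproduces the same bytes, the
two conversions in the first path buy nothing for well-formed string fields.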






[jira] [Commented] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771283#comment-16771283
 ] 

Mark Miller commented on SOLR-13249:


I reopened SOLR-12992. I would recommend we revert that for Solr 7.7. Single 
committer with no review, breaking changes, very small gain.

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String. But 
> in a distributed test i get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
> returns null unless some internal method called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This is since 7.7.0.
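
A defensive pattern for URP code hit by this change is to treat the incoming
value as a plain CharSequence and call toString() on it, rather than relying on
getStringOrNull(). The sketch below is illustrative only: `Utf8CharSequence` is
a stand-in for ByteArrayUtf8CharSequence, and `normalize` is a hypothetical
helper, not Solr API:

```java
import java.nio.charset.StandardCharsets;

public class FieldValueNormalizer {
    // Stand-in for ByteArrayUtf8CharSequence: a CharSequence backed by UTF-8 bytes.
    static final class Utf8CharSequence implements CharSequence {
        private final byte[] utf8;
        Utf8CharSequence(String s) { this.utf8 = s.getBytes(StandardCharsets.UTF_8); }
        @Override public String toString() { return new String(utf8, StandardCharsets.UTF_8); }
        @Override public int length() { return toString().length(); }
        @Override public char charAt(int i) { return toString().charAt(i); }
        @Override public CharSequence subSequence(int a, int b) { return toString().subSequence(a, b); }
    }

    // Hypothetical helper: accept whatever type the field value arrives as
    // (String in standalone mode, a UTF-8-backed sequence via javabin in cloud mode).
    static String normalize(Object fieldValue) {
        if (fieldValue instanceof CharSequence) {
            return fieldValue.toString();  // safe for String and UTF-8-backed sequences alike
        }
        return fieldValue == null ? null : String.valueOf(fieldValue);
    }

    public static void main(String[] args) {
        System.out.println(normalize("plain"));
        System.out.println(normalize(new Utf8CharSequence("from javabin")));
    }
}
```

The point of the pattern is simply that `instanceof CharSequence` plus
toString() behaves the same regardless of which concrete type the update chain
delivers.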






[jira] [Updated] (SOLR-12992) Avoid creating Strings from BytesRef in ExportWriter

2019-02-18 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12992:
---
Priority: Blocker  (was: Major)

> Avoid creating Strings from BytesRef in ExportWriter 
> -
>
> Key: SOLR-12992
> URL: https://issues.apache.org/jira/browse/SOLR-12992
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.7
>
> Attachments: SOLR-12992.patch
>
>







[jira] [Comment Edited] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771279#comment-16771279
 ] 

Mark Miller edited comment on SOLR-13249 at 2/18/19 6:19 PM:
-

{quote}
 I have an URP that, in processAdd(), gets a field value via 
SolrInputField.getValue(). In a normal unit test this yields me a String. But 
in a distributed test i get a ByteArrayUtf8CharSequence.


{quote}
This is no good. We should not release 7.7 with this change.


was (Author: markrmil...@gmail.com):
{format}

I have an URP that, in processAdd(), gets a field value via 
SolrInputField.getValue(). In a normal unit test this yields me a String. But 
in a distributed test i get a ByteArrayUtf8CharSequence.

If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
returns null unless some internal method called _getStr first.

{format}

 

This is no good. We should not release 7.7 with this change.

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String. But 
> in a distributed test i get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
> returns null unless some internal method called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This is since 7.7.0.






[jira] [Commented] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771279#comment-16771279
 ] 

Mark Miller commented on SOLR-13249:


{format}

I have an URP that, in processAdd(), gets a field value via 
SolrInputField.getValue(). In a normal unit test this yields me a String. But 
in a distributed test i get a ByteArrayUtf8CharSequence.

If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
returns null unless some internal method called _getStr first.

{format}

 

This is no good. We should not release 7.7 with this change.

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String. But 
> in a distributed test i get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
> returns null unless some internal method called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This is since 7.7.0.






[jira] [Resolved] (SOLR-13241) Add "autoscaling" tool to the Windows script

2019-02-18 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-13241.

Resolution: Fixed

> Add "autoscaling" tool to the Windows script
> 
>
> Key: SOLR-13241
> URL: https://issues.apache.org/jira/browse/SOLR-13241
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-13241.patch
>
>
> SOLR-13155 added a command-line tool for testing autoscaling configurations. 
> The tool can be accessed by Unix {{bin/solr}} script but it's not integrated 
> with the Windows {{bin\solr.cmd}} script.






[jira] [Commented] (SOLR-13249) ByteArrayUtf8CharSequence.getStringOrNull returns null

2019-02-18 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771277#comment-16771277
 ] 

Mark Miller commented on SOLR-13249:


Should we revert this change? Seems like it needs more time to bake.

> ByteArrayUtf8CharSequence.getStringOrNull returns null 
> ---
>
> Key: SOLR-13249
> URL: https://issues.apache.org/jira/browse/SOLR-13249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 8.0
>
>
> I have an URP that, in processAdd(), gets a field value via 
> SolrInputField.getValue(). In a normal unit test this yields me a String. But 
> in a distributed test i get a ByteArrayUtf8CharSequence.
> If it is a ByteArrayUtf8CharSequence the getStringOrNull() method always 
> returns null unless some internal method called _getStr first.
> This is either by design or a mistake. If it is a mistake, then the fix is to 
> use toString(), and the getStringOrNull() method can be removed (it would 
> become a duplicate of toString()). If it is by design, then nothing is 
> obvious from the JavaDoc, and it should be clarified.
> This is since 7.7.0.






[jira] [Updated] (LUCENE-8698) Fix replaceIgnoreCase method bug in EscapeQuerySyntaxImpl

2019-02-18 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8698:
---
Attachment: LUCENE-8698.patch

> Fix replaceIgnoreCase method bug in EscapeQuerySyntaxImpl
> -
>
> Key: LUCENE-8698
> URL: https://issues.apache.org/jira/browse/LUCENE-8698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Namgyu Kim
>Priority: Major
> Attachments: LUCENE-8698.patch
>
>
> It is a patch of LUCENE-8572 issue from [~tonicava].
>  
> There is a serious bug in the replaceIgnoreCase method of the 
> EscapeQuerySyntaxImpl class.
> This issue can affect QueryNode. (StringIndexOutOfBoundsException)
> As I mentioned in a comment on that issue, String#toLowerCase() can cause the 
> string to grow in size.
> {code:java}
> private static CharSequence replaceIgnoreCase(CharSequence string,
> CharSequence sequence1, CharSequence escapeChar, Locale locale) {
>   // string = "İpone " [304, 112, 111, 110, 101, 32],  size = 6
>   ...
>   while (start < count) {
> // Convert by toLowerCase as follows.
> // string = "i'̇pone " [105, 775, 112, 111, 110, 101, 32], size = 7
> // firstIndex will be set 6.
> if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first,
> start)) == -1)
>   break;
> boolean found = true;
> ...
> if (found) {
>   // In this line, string.toString() will only have a range of 0 to 5.
>   // So here we get a StringIndexOutOfBoundsException.
>   result.append(string.toString().substring(copyStart, firstIndex));
>   ...
> } else {
>   start = firstIndex + 1;
> }
>   }
>   ...
> }{code}
> Maintaining the overall structure while fixing the bug is very simple.
> If we change to the following code, the method works fine.
>  
> {code:java}
> // Line 135 ~ 136
> // BEFORE
> if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first, 
> start)) == -1)
> // AFTER
> if ((firstIndex = string.toString().indexOf(first, start)) == -1)
> {code}
>  
>  
> But I wonder if this is the best way.
> What do you think about using String#replace() instead?
>  
> {code:java}
> // SAMPLE : escapeWhiteChar (escapeChar and escapeQuoted are same)
> // BEFORE
> private static final CharSequence escapeWhiteChar(CharSequence str,
> Locale locale) {
>   ...
>   for (int i = 0; i < escapableWhiteChars.length; i++) {
> buffer = replaceIgnoreCase(buffer, 
> escapableWhiteChars[i].toLowerCase(locale),
> "\\", locale);
>   }
>   ...
> }
> // AFTER
> private static final CharSequence escapeWhiteChar(CharSequence str,
> Locale locale) {
>   ...
>   for (int i = 0; i < escapableWhiteChars.length; i++) {
> buffer = buffer.toString().replace(escapableWhiteChars[i], "\\" + 
> escapableWhiteChars[i]);
>   }
>   ...
> }
> {code}
>  
> First, I upload the patch using String#replace().
> If you give me some feedback, I will check it :D
>  
>  
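
The length change at the heart of this bug can be reproduced standalone: the
Turkish dotted capital I (U+0130) lowercases in Java to 'i' plus a combining
dot above (U+0307), so the lowercased string is one char longer than the
original. This is a sketch of the failure mode, not Lucene code:

```java
import java.util.Locale;

public class LowercaseGrowth {
    public static void main(String[] args) {
        String s = "\u0130pone ";                   // "İpone " — 6 chars
        String lower = s.toLowerCase(Locale.ROOT);  // "i\u0307pone " — 7 chars
        System.out.println(s.length() + " -> " + lower.length()); // prints "6 -> 7"
        // Any index found via indexOf on `lower` can therefore exceed the valid
        // range of `s`, which is how replaceIgnoreCase can end up calling
        // substring with an out-of-range index on the original string.
    }
}
```

This is why searching the lowercased copy but slicing the original string is
unsafe whenever case mapping changes the length.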






[jira] [Created] (LUCENE-8698) Fix replaceIgnoreCase method bug in EscapeQuerySyntaxImpl

2019-02-18 Thread Namgyu Kim (JIRA)
Namgyu Kim created LUCENE-8698:
--

 Summary: Fix replaceIgnoreCase method bug in EscapeQuerySyntaxImpl
 Key: LUCENE-8698
 URL: https://issues.apache.org/jira/browse/LUCENE-8698
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Reporter: Namgyu Kim


It is a patch of LUCENE-8572 issue from [~tonicava].

 

There is a serious bug in the replaceIgnoreCase method of the 
EscapeQuerySyntaxImpl class.

This issue can affect QueryNode. (StringIndexOutOfBoundsException)

As I mentioned in a comment on that issue, String#toLowerCase() can cause the 
string to grow in size.
{code:java}
private static CharSequence replaceIgnoreCase(CharSequence string,
CharSequence sequence1, CharSequence escapeChar, Locale locale) {
  // string = "İpone " [304, 112, 111, 110, 101, 32],  size = 6
  ...
  while (start < count) {
// Convert by toLowerCase as follows.
// string = "i'̇pone " [105, 775, 112, 111, 110, 101, 32], size = 7
// firstIndex will be set 6.
if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first,
start)) == -1)
  break;
boolean found = true;
...
if (found) {
  // In this line, string.toString() will only have a range of 0 to 5.
  // So here we get a StringIndexOutOfBoundsException.
  result.append(string.toString().substring(copyStart, firstIndex));
  ...
} else {
  start = firstIndex + 1;
}
  }
  ...
}{code}
Maintaining the overall structure while fixing the bug is very simple.

If we change to the following code, the method works fine.

 
{code:java}
// Line 135 ~ 136
// BEFORE
if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first, start)) 
== -1)

// AFTER
if ((firstIndex = string.toString().indexOf(first, start)) == -1)
{code}
 

 

But I wonder if this is the best way.

What do you think about using String#replace() instead?

 
{code:java}
// SAMPLE : escapeWhiteChar (escapeChar and escapeQuoted are same)
// BEFORE
private static final CharSequence escapeWhiteChar(CharSequence str,
Locale locale) {
  ...
  for (int i = 0; i < escapableWhiteChars.length; i++) {
buffer = replaceIgnoreCase(buffer, 
escapableWhiteChars[i].toLowerCase(locale),
"\\", locale);
  }
  ...
}

// AFTER
private static final CharSequence escapeWhiteChar(CharSequence str,
Locale locale) {
  ...
  for (int i = 0; i < escapableWhiteChars.length; i++) {
buffer = buffer.toString().replace(escapableWhiteChars[i], "\\" + 
escapableWhiteChars[i]);
  }
  ...
}
{code}
 

First, I upload the patch using String#replace().
If you give me some feedback, I will check it :D

 

 






[jira] [Commented] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771196#comment-16771196
 ] 

Jason Gerlowski commented on SOLR-13255:


bq. it would be great to have a proper upgrade note as part of the release notes

Hey [~ahubold], I'm working on "Upgrade Notes" for users for the next release 
of our ref-guide, and I wanted them to include this issue.  I included a short 
paragraph over on SOLR-13256.  Since you mentioned you were interested in 
seeing this get documented, I wanted to give you a heads up.  Feel free to 
chime in over there about anything I got wrong or any suggestions you might 
have.

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)






[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771217#comment-16771217
 ] 

Shalin Shekhar Mangar commented on SOLR-13248:
--

I'm finishing a test run. I'll ping you once I commit. It should be good to go 
once committed.

> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling as the default replica placement strategy. 
> However in the absence of SOLR-12845, replicas can be placed without any 
> regards for maxShardsPerNode causing multiple replicas of the same shard to 
> be placed on the same node together. Also it was reported in SOLR-13247 that 
> createNodeSet is not being respected as well.
> SOLR-13159 was an early signal of the problem but it was not reproducible and 
> there was a DNS problem in the cluster too so the root cause was not clear 
> then.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.






[jira] [Updated] (SOLR-13256) Ref Guide: Upgrade Notes for 7.7

2019-02-18 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-13256:
---
Attachment: SOLR-13256.patch

> Ref Guide: Upgrade Notes for 7.7
> 
>
> Key: SOLR-13256
> URL: https://issues.apache.org/jira/browse/SOLR-13256
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13256.patch
>
>
> With 7.7 released and out the door, we should get the ball moving on a 7.7 
> ref-guide.  One of the prerequisites for that process is putting together 
> some upgrade notes that can go in 
> {{solr/solr-ref-guide/src/solr-upgrade-notes.adoc}} for users upgrading to 
> 7.7.
> I'm going to take a look at CHANGES and take a first pass at the "upgrading" 
> section for 7.7.  If anyone has anything they know should be in the list, 
> please let me know and I'll try to include it.






[jira] [Comment Edited] (SOLR-13255) LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin

2019-02-18 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771196#comment-16771196
 ] 

Jason Gerlowski edited comment on SOLR-13255 at 2/18/19 4:24 PM:
-

bq. it would be great to have a proper upgrade note as part of the release notes

Hey [~ahubold], I'm working on "Upgrade Notes" for the next release of our 
ref-guide, and I wanted them to include this issue.  I included a short 
paragraph over on SOLR-13256.  Since you mentioned you were interested in 
seeing this get documented, I wanted to give you a heads up.  Feel free to 
chime in over there about anything I got wrong or any suggestions you might 
have.


was (Author: gerlowskija):
bq. it would be great to have a proper upgrade note as part of the release notes

Hey [~ahubold], I'm working on "Upgrade Notes" for users for the next release 
of our ref-guide, and I wanted them to include this issue.  I included a short 
paragraph over on SOLR-13256.  Since you mentioned you were interested in 
seeing this get documented, I wanted to give you a heads up.  Feel free to 
chime in over there about anything I got wrong or any suggestions you might 
have.

> LanguageIdentifierUpdateProcessor broken for documents sent with SolrJ/javabin
> --
>
> Key: SOLR-13255
> URL: https://issues.apache.org/jira/browse/SOLR-13255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.7
>Reporter: Andreas Hubold
>Priority: Major
> Fix For: 8.0, 7.7.1
>
> Attachments: SOLR-13255.patch
>
>
> 7.7 changed the object type of string field values that are passed to 
> UpdateRequestProcessor implementations from java.lang.String to 
> ByteArrayUtf8CharSequence. SOLR-12992 was mentioned on solr-user as cause.
> The LangDetectLanguageIdentifierUpdateProcessor still expects String values, 
> does not work for CharSequences, and logs warnings instead. For example:
> {noformat}
> 2019-02-14 13:14:47.537 WARN  (qtp802600647-19) [   x:studio] 
> o.a.s.u.p.LangDetectLanguageIdentifierUpdateProcessor Field name_tokenized 
> not a String value, not including in detection
> {noformat}
> I'm not sure, but there could be further places where the changed type for 
> string values needs to be handled. (Our custom UpdateRequestProcessors are 
> broken as well since 7.7, and it would be great to have a proper upgrade note 
> as part of the release notes.)






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 23699 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23699/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([51C6DA4D76EFD894:987398E37F881E61]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue(TestSimTriggerIntegration.java:757)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate

Error Message:
The trigger did not start in time

Stack Trace:
java.lang.AssertionError: The trigger did not start in time
    at __randomizedtesting.SeedInfo.seed([51C6

[GitHub] gerlowskija commented on a change in pull request #575: SOLR-13235: Split Collections API Ref Guide page

2019-02-18 Thread GitBox
gerlowskija commented on a change in pull request #575: SOLR-13235: Split 
Collections API Ref Guide page
URL: https://github.com/apache/lucene-solr/pull/575#discussion_r257728062
 
 

 ##
 File path: solr/solr-ref-guide/src/cluster-node-management.adoc
 ##
 @@ -0,0 +1,496 @@
+= Cluster and Node Management Commands
+:page-tocclass: right
+:page-toclevels: 1
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+A cluster is a set of Solr nodes operating in coordination with each other.
+
+These API commands work with a SolrCloud cluster at the entire cluster level, 
or on individual nodes.
+
+[[clusterstatus]]
+== CLUSTERSTATUS: Cluster Status
+
+Fetch the cluster status, including collections, shards, replicas, and configuration name, as well as collection aliases and cluster properties.
+
+`/admin/collections?action=CLUSTERSTATUS`
+
+=== CLUSTERSTATUS Parameters
+
+`collection`::
+The collection name for which information is requested. If omitted, 
information on all collections in the cluster will be returned.
+
+`shard`::
+The shard(s) for which information is requested. Multiple shard names can be 
specified as a comma-separated list.
+
+`\_route_`::
+This can be used if you need the details of the shard a particular document belongs to but don't know which shard it falls under.
+
+=== CLUSTERSTATUS Response
+
+The response will include the status of the request and the status of the 
cluster.
+
+=== Examples using CLUSTERSTATUS
+
+*Input*
+
+[source,text]
+----
+http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS
+----
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":333},
+  "cluster":{
+    "collections":{
+      "collection1":{
+        "shards":{
+          "shard1":{
+            "range":"8000-",
+            "state":"active",
+            "replicas":{
+              "core_node1":{
+                "state":"active",
+                "core":"collection1",
+                "node_name":"127.0.1.1:8983_solr",
+                "base_url":"http://127.0.1.1:8983/solr",
+                "leader":"true"},
+              "core_node3":{
+                "state":"active",
+                "core":"collection1",
+                "node_name":"127.0.1.1:8900_solr",
+                "base_url":"http://127.0.1.1:8900/solr"}}},
+          "shard2":{
+            "range":"0-7fff",
+            "state":"active",
+            "replicas":{
+              "core_node2":{
+                "state":"active",
+                "core":"collection1",
+                "node_name":"127.0.1.1:7574_solr",
+                "base_url":"http://127.0.1.1:7574/solr",
+                "leader":"true"},
+              "core_node4":{
+                "state":"active",
+                "core":"collection1",
+                "node_name":"127.0.1.1:7500_solr",
+                "base_url":"http://127.0.1.1:7500/solr"}}}},
+        "maxShardsPerNode":"1",
+        "router":{"name":"compositeId"},
+        "replicationFactor":"1",
+        "znodeVersion": 11,
+        "autoCreated":"true",
+        "configName" : "my_config",
+        "aliases":["both_collections"]
+      },
+      "collection2":{
+        "..."
+      }
+    },
+    "aliases":{ "both_collections":"collection1,collection2" },
+    "roles":{
+      "overseer":[
+        "127.0.1.1:8983_solr",
+        "127.0.1.1:7574_solr"]
+    },
+    "live_nodes":[
+      "127.0.1.1:7574_solr",
+      "127.0.1.1:7500_solr",
+      "127.0.1.1:8983_solr",
+      "127.0.1.1:8900_solr"]
+  }
+}
+----
+
+[[clusterprop]]
+== CLUSTERPROP: Cluster Properties
+
+Add, edit or delete a cluster-wide property.
+
+`/admin/collections?action=CLUSTERPROP&name=_propertyName_&val=_propertyValue_`
+
+=== CLUSTERPROP Parameters
+
+`name`::
+The name of the property. Supported properties names are `urlScheme` and 
`autoAddReplicas and location`. Other names are rejected with an error.
 
 Review comment:
   [0] Missing opening/closing backticks after `autoAddReplicas` and `location`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL 

[GitHub] gerlowskija commented on a change in pull request #575: SOLR-13235: Split Collections API Ref Guide page

2019-02-18 Thread GitBox
gerlowskija commented on a change in pull request #575: SOLR-13235: Split 
Collections API Ref Guide page
URL: https://github.com/apache/lucene-solr/pull/575#discussion_r257733502
 
 

 ##
 File path: solr/solr-ref-guide/src/collection-management.adoc
 ##
 @@ -0,0 +1,752 @@
+= Collection Management Commands
+:page-tocclass: right
+:page-toclevels: 1
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+A collection is a single logical index that uses a single Solr configuration 
file (`solrconfig.xml`) and a single index schema.
+
+[[create]]
+== CREATE: Create a Collection
+
+`/admin/collections?action=CREATE&name=_name_`
+
+=== CREATE Parameters
+
+The CREATE action allows the following parameters:
+
+`name`::
+The name of the collection to be created. This parameter is required.
+
+`router.name`::
+The router name that will be used. The router defines how documents will be 
distributed among the shards. Possible values are `implicit` or `compositeId`, 
which is the default.
++
+The `implicit` router does not automatically route documents to different 
shards. Whichever shard you indicate on the indexing request (or within each 
document) will be used as the destination for those documents.
++
+The `compositeId` router hashes the value in the uniqueKey field and looks up 
that hash in the collection's clusterstate to determine which shard will 
receive the document, with the additional ability to manually direct the 
routing.
++
+When using the `implicit` router, the `shards` parameter is required. When 
using the `compositeId` router, the `numShards` parameter is required.
++
+For more information, see also the section 
<>.
+
+`numShards`::
+The number of shards to be created as part of the collection. This is a 
required parameter when the `router.name` is `compositeId`.
+
+`shards`::
+A comma separated list of shard names, e.g., `shard-x,shard-y,shard-z`. This 
is a required parameter when the `router.name` is `implicit`.
+
+`replicationFactor`::
+The number of replicas to be created for each shard. The default is `1`.
++
+This will create an NRT type of replica. If you want another type of replica, see the `tlogReplicas` and `pullReplicas` parameters below. See the section <> for more information about replica types.
+
+`nrtReplicas`::
+The number of NRT (Near-Real-Time) replicas to create for this collection. 
This type of replica maintains a transaction log and updates its index locally. 
If you want all of your replicas to be of this type, you can simply use 
`replicationFactor` instead.
+
+`tlogReplicas`::
+The number of TLOG replicas to create for this collection. This type of 
replica maintains a transaction log but only updates its index via replication 
from a leader. See the section 
<> for more information about replica types.
+
+`pullReplicas`::
+The number of PULL replicas to create for this collection. This type of 
replica does not maintain a transaction log and only updates its index via 
replication from a leader. This type is not eligible to become a leader and 
should not be the only type of replicas in the collection. See the section 
<> for more information about replica types.
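The replica-type parameters above can be combined in a single CREATE request. A minimal sketch of building such a request URL (the host and collection name are hypothetical; adjust for your cluster):

```python
from urllib.parse import urlencode

# Hypothetical host and collection name; adjust for your cluster.
params = {
    "action": "CREATE",
    "name": "techproducts",
    "numShards": 2,
    "nrtReplicas": 1,   # near-real-time replicas per shard
    "tlogReplicas": 1,  # transaction-log replicas per shard
    "pullReplicas": 1,  # pull-only replicas per shard
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
# Issue the request against a live cluster with e.g. urllib.request.urlopen(url).
```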
+
+`maxShardsPerNode`::
+When creating collections, the shards and/or replicas are spread across all 
available (i.e., live) nodes, and two replicas of the same shard will never be 
on the same node.
++
+If a node is not live when the CREATE action is called, it will not get any 
parts of the new collection, which could lead to too many replicas being 
created on a single live node. Defining `maxShardsPerNode` sets a limit on the 
number of replicas the CREATE action will spread to each node.
++
+If the entire collection cannot fit on the live nodes, no collection will be created at all. The default `maxShardsPerNode` value is `1`.
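The placement rule above reduces to a simple capacity check; a minimal sketch (hypothetical helper, not Solr code):

```python
def fits(num_shards: int, replication_factor: int,
         live_nodes: int, max_shards_per_node: int = 1) -> bool:
    """Return True if every replica can be placed without exceeding
    max_shards_per_node replicas on any single live node."""
    total_replicas = num_shards * replication_factor
    return total_replicas <= live_nodes * max_shards_per_node

# 2 shards x 2 replicas = 4 replicas: 4 live nodes with the default
# limit of 1 replica per node can host them; 3 nodes cannot.
assert fits(2, 2, live_nodes=4)
assert not fits(2, 2, live_nodes=3)
```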
+
+`createNodeSet`::
+Allows defining the nodes to spread the new collection across. The format is a 
comma-separated list of node_names, such as 
`localhost:8983_solr,localhost:8984_solr,localhost:8985_solr`.
++
+If not provided, the CREATE operation will create shard-replicas spread across 
all live Solr nodes.
++
+Alternatively, use the special value of `EMPTY` to initially create no 
shard-replica with

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7734 - Still Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7734/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [MMapDirectory, MMapDirectory, SolrCore, MMapDirectory, MMapDirectory, InternalHttpClient]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.MMapDirectory
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
    at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)
    at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:778)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:975)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:882)
    at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)
    at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
    at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
    at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)
    at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:164)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.Server.handle(Server.java:502)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
    at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
    at java.base/java.lang.Thread.run(Thread.java:834)
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.MMapDirectory
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
    at org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:516)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:967)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:882)
    at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)
    at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
    at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
    at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.servlet.HttpSolrCall.handleAdm

[jira] [Commented] (LUCENE-8695) Word delimiter graph or span queries bug

2019-02-18 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771144#comment-16771144
 ] 

Michael Gibney commented on LUCENE-8695:


I'll second [~khitrin]; if you're interested, I have pushed a branch that 
attempts to address this issue (linked to from 
[LUCENE-7398|https://issues.apache.org/jira/browse/LUCENE-7398#comment-16630529])
 ... feedback/testing welcome!

Regarding storing positionLength in the index -- would there be any interest in 
revisiting this possibility 
([LUCENE-4312|https://issues.apache.org/jira/browse/LUCENE-4312])? The 
branch/patch referenced above currently records positionLength in Payloads.

> Word delimiter graph or span queries bug
> 
>
> Key: LUCENE-8695
> URL: https://issues.apache.org/jira/browse/LUCENE-8695
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.7
>Reporter: Pawel Rog
>Priority: Major
>
> I have a simple phrase query and a token stream which uses the word 
> delimiter graph, and the query fails to match. I tried different configurations 
> of the word delimiter graph but could not find a good solution for this. I 
> don't actually know if the problem is on the word delimiter side or maybe on 
> the span queries side.
> Query which is generated:
> {code:java}
>  spanNear([field:added, spanOr([field:foobarbaz, spanNear([field:foo, 
> field:bar, field:baz], 0, true)]), field:entry], 0, true)
> {code}
>  
> Code of test where I isolated the problem is attached below:
> {code:java}
> public class TestPhrase extends LuceneTestCase {
>   private static IndexSearcher searcher;
>   private static IndexReader reader;
>   private Query query;
>   private static Directory directory;
>   private static Analyzer searchAnalyzer = new Analyzer() {
> @Override
> public TokenStreamComponents createComponents(String fieldName) {
>   Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
> false);
>   TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
> WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
>   WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
>   WordDelimiterGraphFilter.CATENATE_WORDS |
>   WordDelimiterGraphFilter.CATENATE_NUMBERS |
>   WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
>   CharArraySet.EMPTY_SET);
>   TokenFilter filter2 = new LowerCaseFilter(filter1);
>   return new TokenStreamComponents(tokenizer, filter2);
> }
>   };
>   private static Analyzer indexAnalyzer = new Analyzer() {
> @Override
> public TokenStreamComponents createComponents(String fieldName) {
>   Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
> false);
>   TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
> WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
>   WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
>   WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS |
>   WordDelimiterGraphFilter.CATENATE_WORDS |
>   WordDelimiterGraphFilter.CATENATE_NUMBERS |
>   WordDelimiterGraphFilter.PRESERVE_ORIGINAL |
>   WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
>   CharArraySet.EMPTY_SET);
>   TokenFilter filter2 = new LowerCaseFilter(filter1);
>   return new TokenStreamComponents(tokenizer, filter2);
> }
> @Override
> public int getPositionIncrementGap(String fieldName) {
>   return 100;
> }
>   };
>   @BeforeClass
>   public static void beforeClass() throws Exception {
> directory = newDirectory();
> RandomIndexWriter writer = new RandomIndexWriter(random(), directory, 
> indexAnalyzer);
> Document doc = new Document();
> doc.add(newTextField("field", "Added FooBarBaz entry", Field.Store.YES));
> writer.addDocument(doc);
> reader = writer.getReader();
> writer.close();
> searcher = new IndexSearcher(reader);
>   }
>   @Override
>   public void setUp() throws Exception {
> super.setUp();
>   }
>   @AfterClass
>   public static void afterClass() throws Exception {
> searcher = null;
> reader.close();
> reader = null;
> directory.close();
> directory = null;
>   }
>   public void testSearch() throws Exception {
> QueryParser parser = new QueryParser("field", searchAnalyzer);
> query = parser.parse("\"Added FooBarBaz entry\"");
> System.out.println(query);
> ScoreDoc[] hits = searcher.search(query, 1000).scoreDocs;
> assertEquals(1, hits.length);
>   }
> }
> {code}
>  
>  
> NOTE: I tested it on Lucene 7.1.0, 7.4.0 and 7.7.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache

[jira] [Commented] (SOLR-13248) Autoscaling based replica placement is broken out of the box

2019-02-18 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771126#comment-16771126
 ] 

Alan Woodward commented on SOLR-13248:
--

Is this ready to be committed?  And am I OK to start building the 8.0 release 
immediately once it's in, or do we want to give it 24 hours to bake?

> Autoscaling based replica placement is broken out of the box
> 
>
> Key: SOLR-13248
> URL: https://issues.apache.org/jira/browse/SOLR-13248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6, 7.7
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-13248-withDefaultCollectionProp.patch, 
> SOLR-13248.patch, SOLR-13248.patch, SOLR-13248.patch
>
>
> SOLR-12739 made autoscaling as the default replica placement strategy. 
> However in the absence of SOLR-12845, replicas can be placed without any 
> regards for maxShardsPerNode causing multiple replicas of the same shard to 
> be placed on the same node together. Also it was reported in SOLR-13247 that 
> createNodeSet is not being respected as well.
> SOLR-13159 was an early signal of the problem but it was not reproducible and 
> there was a DNS problem in the cluster too so the root cause was not clear 
> then.
> I am creating this blocker issue because as it stands today, we cannot 
> guarantee the layout of new collections. At a minimum, we should revert to 
> using the legacy replica assignment policy or add default policies with 
> SOLR-12845 and have createNodeSet work. Related but not mandatory would be to 
> fix SOLR-12847 as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8697) GraphTokenStreamFiniteStrings does not correctly handle gaps in the token graph

2019-02-18 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771123#comment-16771123
 ] 

Alan Woodward commented on LUCENE-8697:
---

Patch with fix and some illustrative tests.  It also fixes an already-existing 
test that had some incorrect assumptions.

> GraphTokenStreamFiniteStrings does not correctly handle gaps in the token 
> graph
> ---
>
> Key: LUCENE-8697
> URL: https://issues.apache.org/jira/browse/LUCENE-8697
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8697.patch
>
>
> Currently, side-paths with gaps in can end up being missed entirely when 
> iterating through token streams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8697) GraphTokenStreamFiniteStrings does not correctly handle gaps in the token graph

2019-02-18 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8697:
--
Attachment: LUCENE-8697.patch

> GraphTokenStreamFiniteStrings does not correctly handle gaps in the token 
> graph
> ---
>
> Key: LUCENE-8697
> URL: https://issues.apache.org/jira/browse/LUCENE-8697
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8697.patch
>
>
> Currently, side-paths with gaps in can end up being missed entirely when 
> iterating through token streams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8697) GraphTokenStreamFiniteStrings does not correctly handle gaps in the token graph

2019-02-18 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8697:
-

 Summary: GraphTokenStreamFiniteStrings does not correctly handle 
gaps in the token graph
 Key: LUCENE-8697
 URL: https://issues.apache.org/jira/browse/LUCENE-8697
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward


Currently, side-paths with gaps in can end up being missed entirely when 
iterating through token streams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JDK 12: First Release Candidate available

2019-02-18 Thread Rory O'Donnell

Thanks Uwe!

On 18/02/2019 13:39, Uwe Schindler wrote:


Hi,

it’s installed already (both JDK 12 and JDK 13). We also made progress 
with the recent SIGSEGV issue in Apache Lucene/Solr, with fastdebug 
builds we were able to trigger an assertion failure.


Uwe

*From:*Rory O'Donnell 
*Sent:* Monday, February 18, 2019 12:09 PM
*To:* dawid.we...@cs.put.poznan.pl; uwe.h.schind...@gmail.com
*Cc:* rory.odonn...@oracle.com; Dalibor Topic 
; Balchandra Vaidya 
; Muneer Kolarkunnu 
; dev@lucene.apache.org

*Subject:* JDK 12: First Release Candidate available

  Hi Uwe & Dawid,

*OpenJDK builds  - JDK 12 Early Access build 32 is now available at : 
- jdk.java.net/12/*

*JDK 12:  First Release Candidate [1]*

  * Per the JDK 12 schedule [2], we are now in Release Candidate Phase.
  * The stabilization repository, jdk/jdk12, is open for P1 bug fixes
per the JDK Release Process (JEP 3) [3].
  * All changes require approval via the Fix-Request Process [4].
  * Release note additions since last email

  o Build 31 - can_pop_frame and can_force_early_return
    capabilities are disabled if JVMCI compiler is used
    (JDK-8218025). The JVMTI |can_pop_frame| and
    |can_force_early_return| capabilities are disabled if a JVMCI
    compiler (like Graal) is used. As a result the corresponding
    functionality (|PopFrame| and |ForceEarlyReturnXXX| functions)
    is not available to JVMTI agents. This issue is being fixed via
    JDK-8218885 [https://bugs.openjdk.java.net/browse/JDK-8218885].
  o Build 28: JDK-8212233: javadoc fails on jdk12 with "The code
    being documented uses modules but the packages defined in $URL
    are in the unnamed module."

  * Changes in this build.



*OpenJDK builds  - JDK 13 Early Access build 8 is now available at : - 
jdk.java.net/13/*


  * These early-access, open-source builds are provided under the

  o GNU General Public License, version 2, with the Classpath
    Exception.

  * Release Notes updates
  * Build 8

  o GraphicsEnvironment.getCenterPoint()/getMaximumWindowBounds()
    are unified across the platforms (JDK-8214918)
  o The experimental FIPS 140 compliant mode has been removed from
    the SunJSSE provider. (JDK-8217835)

  * Build 7

  o Change DOM parser to not resolve EntityReference and add Text
    node with DocumentBuilderFactory.setExpandEntityReferences(false)
    (JDK-8206132)

  * Build 6

  o Base64.Encoder and Base64.Decoder methods can throw
    OutOfMemoryError (JDK-8210583)

  * Changes in this build


  * FOSS Bugs fixed in recent builds

  o Build 6: JDK-8216970: condy causes JVM crash
  o Build 7: JDK-8215577: Remove javadoc support for HTML 4

Rgds,Rory

[1] 
https://mail.openjdk.java.net/pipermail/jdk-dev/2019-February/002623.html

[2] http://openjdk.java.net/projects/jdk/12/#Schedule
[3] http://openjdk.java.net/jeps/3
[4] http://openjdk.java.net/jeps/3#Fix-Request-Process

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin,Ireland



RE: JDK 12: First Release Candidate available

2019-02-18 Thread Uwe Schindler
Hi,

 

it’s installed already (both JDK 12 and JDK 13). We also made progress with the 
recent SIGSEGV issue in Apache Lucene/Solr, with fastdebug builds we were able 
to trigger an assertion failure.

 

Uwe

 

 

From: Rory O'Donnell  
Sent: Monday, February 18, 2019 12:09 PM
To: dawid.we...@cs.put.poznan.pl; uwe.h.schind...@gmail.com
Cc: rory.odonn...@oracle.com; Dalibor Topic ; 
Balchandra Vaidya ; Muneer Kolarkunnu 
; dev@lucene.apache.org
Subject: JDK 12: First Release Candidate available

 

  Hi Uwe & Dawid,  

OpenJDK builds  - JDK 12 Early Access build 32 is now available at : - 
jdk.java.net/12/
JDK 12:  First Release Candidate [1] 

*   Per the JDK 12 schedule [2], we are now in Release Candidate Phase.
*   The stabilization repository, jdk/jdk12, is open for P1 bug fixes per 
the JDK Release Process (JEP 3) [3]. 
*   All changes require approval via the Fix-Request Process [4].
*   Release note additions since last email 

*   Build 31 - can_pop_frame and can_force_early_return capabilities are 
disabled if JVMCI compiler is used (JDK-8218025). The JVMTI can_pop_frame 
and can_force_early_return capabilities are disabled if a JVMCI compiler (like 
Graal) is used. As a result the corresponding functionality (PopFrame and 
ForceEarlyReturnXXX functions) is not available to JVMTI agents. This issue is 
being fixed via JDK-8218885 [https://bugs.openjdk.java.net/browse/JDK-8218885].
*   Build 28: JDK-8212233: javadoc fails on jdk12 with "The code being 
documented uses modules but the packages defined in $URL are in the unnamed 
module."

*   Changes in this build. 

 

OpenJDK builds  - JDK 13 Early Access build 8 is now available at : - 
jdk.java.net/13/

*   These early-access, open-source builds are provided under the GNU 
General Public License, version 2, with the Classpath Exception.

*   Release Notes updates
*   Build 8

*   GraphicsEnvironment.getCenterPoint()/getMaximumWindowBounds() are 
unified across the platforms (JDK-8214918)
*   The experimental FIPS 140 compliant mode has been removed from the 
SunJSSE provider. (JDK-8217835)

*   Build 7

*   Change DOM parser to not resolve EntityReference and add Text node with 
DocumentBuilderFactory.setExpandEntityReferences(false) (JDK-8206132)

*   Build 6

*   Base64.Encoder and Base64.Decoder methods can throw OutOfMemoryError 
(JDK-8210583)

*   Changes in this build 

 
*   FOSS Bugs fixed in recent builds

*   Build 6: JDK-8216970: condy causes JVM crash
*   Build 7: JDK-8215577: Remove javadoc support for HTML 4

 

Rgds,Rory 

[1] https://mail.openjdk.java.net/pipermail/jdk-dev/2019-February/002623.html
[2] http://openjdk.java.net/projects/jdk/12/#Schedule
[3] http://openjdk.java.net/jeps/3
[4] http://openjdk.java.net/jeps/3#Fix-Request-Process



-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland 


[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-02-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771078#comment-16771078
 ] 

Jan Høydahl commented on SOLR-11876:


Excellent, thanks Ishan! DRY is better :) 

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.x, master (9.0), 7.7.1
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Setup a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.p

[JENKINS-EA] Lucene-Solr-BadApples-8.x-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 18 - Unstable!

2019-02-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-8.x-Linux/18/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate

Error Message:
last ClusterState: znodeVersion: 6 live nodes:[127.0.0.1:10019_solr, 
127.0.0.1:10020_solr] 
collections:{collection1=DocCollection(collection1//clusterstate.json/5)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{"shard1":{   "replicas":{ 
"core_node1":{   "core":"collection1_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10019_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0},  
   "core_node2":{   "core":"collection1_shard1_replica_n2", 
  "SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10020_solr", 
  "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active", last coll state: 
DocCollection(collection1//clusterstate.json/5)={   "replicationFactor":"1",   
"pullReplicas":"0",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0",   "autoCreated":"true",   "shards":{"shard1":{   
"replicas":{ "core_node1":{   
"core":"collection1_shard1_replica_n1",   "SEARCHER.searcher.maxDoc":0, 
  "SEARCHER.searcher.deletedDocs":0,   
"INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10019_solr",
   "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0},  
   "core_node2":{   "core":"collection1_shard1_replica_n2", 
  "SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10020_solr", 
  "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.util.concurrent.TimeoutException: last ClusterState: znodeVersion: 6
live nodes:[127.0.0.1:10019_solr, 127.0.0.1:10020_solr]
collections:{collection1=DocCollection(collection1//clusterstate.json/5)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{"shard1":{
  "replicas":{
"core_node1":{
  "core":"collection1_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10019_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node2":{
  "core":"collection1_shard1_replica_n2",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10020_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active", last coll state: 
DocCollection(collection1//clusterstate.json/5)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{"shard1":{
  "replicas":{
"core_node1":{
  "core":"collection1_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10019_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node2":{
  "core":"collection1_shard1_replica_n2",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10020_solr",
  "state":"active",
  "type":"NRT",
  "

[jira] [Created] (LUCENE-8696) TestGeo3DPoint.testGeo3DRelations failure

2019-02-18 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8696:


 Summary: TestGeo3DPoint.testGeo3DRelations failure
 Key: LUCENE-8696
 URL: https://issues.apache.org/jira/browse/LUCENE-8696
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Reporter: Ignacio Vera


Reproduce with:
{code:java}
ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
-Dtests.seed=721195D0198A8470 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-RS -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1{code}
Error:
{code:java}
   [junit4] FAILURE 1.16s | TestGeo3DPoint.testGeo3DRelations <<<
   [junit4]    > Throwable #1: java.lang.AssertionError: invalid hits for 
shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
width=1.3439035240356338(77.01), 
points={[[lat=2.4457272005608357E-47, 
lon=0.017453291479645996([X=1.0009663787601641, Y=0.017471932090601616, 
Z=2.448463612203698E-47])], [lat=2.4457272005608357E-47, 
lon=0.8952476719156919([X=0.6260252093310985, Y=0.7812370940381473, 
Z=2.448463612203698E-47])], [lat=2.4457272005608357E-47, 
lon=0.6491968536639036([X=0.7974608400583222, Y=0.6052232384770843, 
Z=2.448463612203698E-47])], [lat=-0.7718789008737459, 
lon=0.9236607495528212([X=0.43181767034308555, Y=0.5714183775701452, 
Z=-0.6971214014446648])]]}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods

2019-02-18 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771013#comment-16771013
 ] 

Simon Willnauer commented on LUCENE-8292:
-

[~dsmiley] I coordinated this with [~romseygeek] given that we had to respin 
for https://issues.apache.org/jira/browse/SOLR-13126 anyhow. 

> Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
> --
>
> Key: LUCENE-8292
> URL: https://issues.apache.org/jira/browse/LUCENE-8292
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.2.1
>Reporter: Bruno Roustant
>Priority: Major
> Fix For: trunk, 8.0, 8.x, master (9.0)
>
> Attachments: 
> 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, 
> LUCENE-8292.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many 
> methods.
> It misses some seekExact() methods, so it is not possible for the delegate
> to override these methods with specific behavior (unlike the TermsEnum API,
> which allows that).
> The fix is straightforward: simply override these seekExact() methods and 
> delegate.
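A minimal sketch of the delegation pattern the fix describes, using hypothetical Lucene-free names (TermsCursor/FilterTermsCursor stand in for TermsEnum and FilterLeafReader.FilterTermsEnum; this is not the actual patch):

```java
// Hypothetical, Lucene-free sketch: a filtering wrapper must forward every
// overload (here, seekExact) to its delegate, otherwise a specialized
// delegate implementation is silently bypassed.
public class FilterTermsDemo {

    interface TermsCursor {
        boolean seekExact(String term); // stands in for seekExact(BytesRef)
        long ord();
    }

    static class FilterTermsCursor implements TermsCursor {
        protected final TermsCursor in;

        FilterTermsCursor(TermsCursor in) {
            this.in = in;
        }

        // The bug class LUCENE-8292 fixes: if a delegation like this one is
        // missing, callers never reach the wrapped enum's own seekExact logic.
        @Override
        public boolean seekExact(String term) {
            return in.seekExact(term);
        }

        @Override
        public long ord() {
            return in.ord();
        }
    }

    public static void main(String[] args) {
        TermsCursor base = new TermsCursor() {
            @Override public boolean seekExact(String t) { return "foo".equals(t); }
            @Override public long ord() { return 42L; }
        };
        TermsCursor wrapped = new FilterTermsCursor(base);
        // Both calls reach the specialized delegate through the wrapper.
        System.out.println(wrapped.seekExact("foo")); // true
        System.out.println(wrapped.ord());            // 42
    }
}
```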



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8695) Word delimiter graph or span queries bug

2019-02-18 Thread Nikolay Khitrin (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771012#comment-16771012
 ] 

Nikolay Khitrin commented on LUCENE-8695:
-

I have exactly the same issue.

Unfortunately, span queries don't work with multi-term synonyms, and it looks
like this can't be properly fixed without adding a positionLength attribute to
the index.

> Word delimiter graph or span queries bug
> 
>
> Key: LUCENE-8695
> URL: https://issues.apache.org/jira/browse/LUCENE-8695
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.7
>Reporter: Pawel Rog
>Priority: Major
>
> I have a simple phrase query and a token stream which uses the word
> delimiter graph filter, and the query fails to match. I tried different
> configurations of the word delimiter graph but could not find a good
> solution for this. I don't actually know if the problem is on the word
> delimiter side or maybe on the span queries side.
> Query which is generated:
> {code:java}
>  spanNear([field:added, spanOr([field:foobarbaz, spanNear([field:foo, 
> field:bar, field:baz], 0, true)]), field:entry], 0, true)
> {code}
>  
> Code of test where I isolated the problem is attached below:
> {code:java}
> public class TestPhrase extends LuceneTestCase {
>   private static IndexSearcher searcher;
>   private static IndexReader reader;
>   private Query query;
>   private static Directory directory;
>   private static Analyzer searchAnalyzer = new Analyzer() {
> @Override
> public TokenStreamComponents createComponents(String fieldName) {
>   Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
> false);
>   TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
> WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
>   WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
>   WordDelimiterGraphFilter.CATENATE_WORDS |
>   WordDelimiterGraphFilter.CATENATE_NUMBERS |
>   WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
>   CharArraySet.EMPTY_SET);
>   TokenFilter filter2 = new LowerCaseFilter(filter1);
>   return new TokenStreamComponents(tokenizer, filter2);
> }
>   };
>   private static Analyzer indexAnalyzer = new Analyzer() {
> @Override
> public TokenStreamComponents createComponents(String fieldName) {
>   Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
> false);
>   TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
> WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
>   WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
>   WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS |
>   WordDelimiterGraphFilter.CATENATE_WORDS |
>   WordDelimiterGraphFilter.CATENATE_NUMBERS |
>   WordDelimiterGraphFilter.PRESERVE_ORIGINAL |
>   WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
>   CharArraySet.EMPTY_SET);
>   TokenFilter filter2 = new LowerCaseFilter(filter1);
>   return new TokenStreamComponents(tokenizer, filter2);
> }
> @Override
> public int getPositionIncrementGap(String fieldName) {
>   return 100;
> }
>   };
>   @BeforeClass
>   public static void beforeClass() throws Exception {
> directory = newDirectory();
> RandomIndexWriter writer = new RandomIndexWriter(random(), directory, 
> indexAnalyzer);
> Document doc = new Document();
> doc.add(newTextField("field", "Added FooBarBaz entry", Field.Store.YES));
> writer.addDocument(doc);
> reader = writer.getReader();
> writer.close();
> searcher = new IndexSearcher(reader);
>   }
>   @Override
>   public void setUp() throws Exception {
> super.setUp();
>   }
>   @AfterClass
>   public static void afterClass() throws Exception {
> searcher = null;
> reader.close();
> reader = null;
> directory.close();
> directory = null;
>   }
>   public void testSearch() throws Exception {
> QueryParser parser = new QueryParser("field", searchAnalyzer);
> query = parser.parse("\"Added FooBarBaz entry\"");
> System.out.println(query);
> ScoreDoc[] hits = searcher.search(query, 1000).scoreDocs;
> assertEquals(1, hits.length);
>   }
> }
> {code}
>  
>  
> NOTE: I tested it on Lucene 7.1.0, 7.4.0 and 7.7.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8695) Word delimiter graph or span queries bug

2019-02-18 Thread Pawel Rog (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pawel Rog updated LUCENE-8695:
--
Description: 
I have a simple phrase query and a token stream which uses the word delimiter
graph filter, and the query fails to match. I tried different configurations of
the word delimiter graph but could not find a good solution for this. I don't
actually know if the problem is on the word delimiter side or maybe on the span
queries side.

Query which is generated:
{code:java}
 spanNear([field:added, spanOr([field:foobarbaz, spanNear([field:foo, 
field:bar, field:baz], 0, true)]), field:entry], 0, true)
{code}
 

Code of test where I isolated the problem is attached below:
{code:java}
public class TestPhrase extends LuceneTestCase {

  private static IndexSearcher searcher;
  private static IndexReader reader;
  private Query query;
  private static Directory directory;

  private static Analyzer searchAnalyzer = new Analyzer() {
@Override
public TokenStreamComponents createComponents(String fieldName) {
  Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
  TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
  WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
  WordDelimiterGraphFilter.CATENATE_WORDS |
  WordDelimiterGraphFilter.CATENATE_NUMBERS |
  WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
  CharArraySet.EMPTY_SET);

  TokenFilter filter2 = new LowerCaseFilter(filter1);

  return new TokenStreamComponents(tokenizer, filter2);
}
  };

  private static Analyzer indexAnalyzer = new Analyzer() {
@Override
public TokenStreamComponents createComponents(String fieldName) {
  Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
  TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
  WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
  WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS |
  WordDelimiterGraphFilter.CATENATE_WORDS |
  WordDelimiterGraphFilter.CATENATE_NUMBERS |
  WordDelimiterGraphFilter.PRESERVE_ORIGINAL |
  WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
  CharArraySet.EMPTY_SET);

  TokenFilter filter2 = new LowerCaseFilter(filter1);

  return new TokenStreamComponents(tokenizer, filter2);
}

@Override
public int getPositionIncrementGap(String fieldName) {
  return 100;
}
  };

  @BeforeClass
  public static void beforeClass() throws Exception {
directory = newDirectory();
RandomIndexWriter writer = new RandomIndexWriter(random(), directory, 
indexAnalyzer);

Document doc = new Document();
doc.add(newTextField("field", "Added FooBarBaz entry", Field.Store.YES));
writer.addDocument(doc);

reader = writer.getReader();
writer.close();

searcher = new IndexSearcher(reader);
  }

  @Override
  public void setUp() throws Exception {
super.setUp();
  }

  @AfterClass
  public static void afterClass() throws Exception {
searcher = null;
reader.close();
reader = null;
directory.close();
directory = null;
  }

  public void testSearch() throws Exception {
QueryParser parser = new QueryParser("field", searchAnalyzer);
query = parser.parse("\"Added FooBarBaz entry\"");
System.out.println(query);
ScoreDoc[] hits = searcher.search(query, 1000).scoreDocs;
assertEquals(1, hits.length);
  }

}
{code}
 

 

NOTE: I tested it on Lucene 7.1.0, 7.4.0 and 7.7.0

  was:
I have a simple phrase query and a token stream which uses the word delimiter
graph filter, which fails. I tried different configurations of the word
delimiter graph but could not find a good solution for this. I don't actually
know if the problem is on the word delimiter side or 

Query which is generated:
{code:java}
 spanNear([field:added, spanOr([field:foobarbaz, spanNear([field:foo, 
field:bar, field:baz], 0, true)]), field:entry], 0, true)
{code}
 

Code of test where I isolated the problem is attached below:
{code:java}
public class TestPhrase extends LuceneTestCase {

  private static IndexSearcher searcher;
  private static IndexReader reader;
  private Query query;
  private static Directory directory;

  private static Analyzer searchAnalyzer = new Analyzer() {
@Override
public TokenStreamComponents createComponents(String fieldName) {
  Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
  TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
  WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
  WordDelimiterGraphFilter.CATENATE_WORDS |
  WordDelimiterGraphFilter.CATENATE_NUMBERS |
  WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
  CharArrayS

[jira] [Created] (LUCENE-8695) Word delimiter graph or span queries bug

2019-02-18 Thread Pawel Rog (JIRA)
Pawel Rog created LUCENE-8695:
-

 Summary: Word delimiter graph or span queries bug
 Key: LUCENE-8695
 URL: https://issues.apache.org/jira/browse/LUCENE-8695
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 7.7
Reporter: Pawel Rog


I have a simple phrase query and a token stream which uses the word delimiter
graph filter, which fails. I tried different configurations of the word
delimiter graph but could not find a good solution for this. I don't actually
know if the problem is on the word delimiter side or 

Query which is generated:
{code:java}
 spanNear([field:added, spanOr([field:foobarbaz, spanNear([field:foo, 
field:bar, field:baz], 0, true)]), field:entry], 0, true)
{code}
 

Code of test where I isolated the problem is attached below:
{code:java}
public class TestPhrase extends LuceneTestCase {

  private static IndexSearcher searcher;
  private static IndexReader reader;
  private Query query;
  private static Directory directory;

  private static Analyzer searchAnalyzer = new Analyzer() {
@Override
public TokenStreamComponents createComponents(String fieldName) {
  Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
  TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
  WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
  WordDelimiterGraphFilter.CATENATE_WORDS |
  WordDelimiterGraphFilter.CATENATE_NUMBERS |
  WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
  CharArraySet.EMPTY_SET);

  TokenFilter filter2 = new LowerCaseFilter(filter1);

  return new TokenStreamComponents(tokenizer, filter2);
}
  };

  private static Analyzer indexAnalyzer = new Analyzer() {
@Override
public TokenStreamComponents createComponents(String fieldName) {
  Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
  TokenFilter filter1 = new WordDelimiterGraphFilter(tokenizer, 
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE,
  WordDelimiterGraphFilter.GENERATE_WORD_PARTS |
  WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS |
  WordDelimiterGraphFilter.CATENATE_WORDS |
  WordDelimiterGraphFilter.CATENATE_NUMBERS |
  WordDelimiterGraphFilter.PRESERVE_ORIGINAL |
  WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE,
  CharArraySet.EMPTY_SET);

  TokenFilter filter2 = new LowerCaseFilter(filter1);

  return new TokenStreamComponents(tokenizer, filter2);
}

@Override
public int getPositionIncrementGap(String fieldName) {
  return 100;
}
  };

  @BeforeClass
  public static void beforeClass() throws Exception {
directory = newDirectory();
RandomIndexWriter writer = new RandomIndexWriter(random(), directory, 
indexAnalyzer);

Document doc = new Document();
doc.add(newTextField("field", "Added FooBarBaz entry", Field.Store.YES));
writer.addDocument(doc);

reader = writer.getReader();
writer.close();

searcher = new IndexSearcher(reader);
  }

  @Override
  public void setUp() throws Exception {
super.setUp();
  }

  @AfterClass
  public static void afterClass() throws Exception {
searcher = null;
reader.close();
reader = null;
directory.close();
directory = null;
  }

  public void testSearch() throws Exception {
QueryParser parser = new QueryParser("field", searchAnalyzer);
query = parser.parse("\"Added FooBarBaz entry\"");
System.out.println(query);
ScoreDoc[] hits = searcher.search(query, 1000).scoreDocs;
assertEquals(1, hits.length);
  }

}
{code}
 

 

NOTE: I tested it on Lucene 7.1.0, 7.4.0 and 7.7.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



JDK 12: First Release Candidate available

2019-02-18 Thread Rory O'Donnell

  Hi Uwe & Dawid,

OpenJDK builds - JDK 12 Early Access build 32 is now available at:
jdk.java.net/12/

JDK 12: First Release Candidate [1]

 * Per the JDK 12 schedule [2], we are now in Release Candidate Phase.
 * The stabilization repository, jdk/jdk12, is open for P1 bug fixes
   per the JDK Release Process (JEP 3) [3].
 * All changes require approval via the Fix-Request Process [4].
 * Release note additions since last email

 o Build 31 - can_pop_frame and can_force_early_return capabilities
   are disabled if JVMCI compiler is used (JDK-8218025). The JVMTI
   can_pop_frame and can_force_early_return capabilities are
   disabled if a JVMCI compiler (like Graal) is used. As a result
   the corresponding functionality (PopFrame and
   ForceEarlyReturnXXX functions) is not available to JVMTI
   agents. This issue is being fixed via JDK-8218885
   [https://bugs.openjdk.java.net/browse/JDK-8218885].

 o Build 28: JDK-8212233: javadoc fails on jdk12 with "The code being
   documented uses modules but the packages defined in $URL are in the
   unnamed module."
 * Changes in this build.


OpenJDK builds - JDK 13 Early Access build 8 is now available at:
jdk.java.net/13/


 * These early-access, open-source builds are provided under the
 o GNU General Public License, version 2, with the Classpath
   Exception.
 * Release Notes updates
 * Build 8
 o GraphicsEnvironment.getCenterPoint()/getMaximumWindowBounds()
   are unified across the platforms (JDK-8214918)
 o The experimental FIPS 140 compliant mode has been removed from
   the SunJSSE provider. (JDK-8217835)
 * Build 7
 o Change DOM parser to not resolve EntityReference and add Text
   node with DocumentBuilderFactory.setExpandEntityReferences(false)
   (JDK-8206132)
 * Build 6
 o Base64.Encoder and Base64.Decoder methods can throw
   OutOfMemoryError (JDK-8210583)
 * Changes in this build

 * FOSS Bugs fixed in recent builds
 o Build 6: JDK-8216970: condy causes JVM crash
 o Build 7: JDK-8215577: Remove javadoc support for HTML 4
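For context on the Build 6 Base64 note (JDK-8210583), here is a minimal round-trip with the java.util.Base64 API in question; per that change, encode/decode may throw OutOfMemoryError when the result would exceed the maximum array size, while normal-sized inputs like this one are unaffected:

```java
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // Ordinary round-trip through the Base64 encoder and decoder.
        byte[] input = "hello".getBytes();
        String encoded = Base64.getEncoder().encodeToString(input);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);              // aGVsbG8=
        System.out.println(new String(decoded));  // hello
    }
}
```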


Rgds, Rory

[1] 
https://mail.openjdk.java.net/pipermail/jdk-dev/2019-February/002623.html

[2] http://openjdk.java.net/projects/jdk/12/#Schedule
[3] http://openjdk.java.net/jeps/3
[4] http://openjdk.java.net/jeps/3#Fix-Request-Process

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Comment Edited] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-02-18 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770967#comment-16770967
 ] 

Ishan Chattopadhyaya edited comment on SOLR-7414 at 2/18/19 10:31 AM:
--

I've committed this only to a branch, jira/solr-7414 ^. Sorry for the noise 
(messed up the name format of jira branches).


was (Author: ichattopadhyaya):
I've committed this only to a branch, jira/solr-7414 ^. Sorry for the noise.

> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch
>
>
> Attempting to retrieve all fields while renaming one, e.g., "inStock" to 
> "stocked" (URL below), results in CSV output that has a column for "inStock" 
> (should be "stocked"), and the column has no values. 
> steps to reproduce using 5.1...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
> '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
> 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
> {"responseHeader":{"status":0,"QTime":730}}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
> id,stocked
> aaa,true
> bbb,false
> ccc,true
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-02-18 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770967#comment-16770967
 ] 

Ishan Chattopadhyaya commented on SOLR-7414:


I've committed this only to a branch, jira/solr-7414 ^. Sorry for the noise.

> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch
>
>
> Attempting to retrieve all fields while renaming one, e.g., "inStock" to 
> "stocked" (URL below), results in CSV output that has a column for "inStock" 
> (should be "stocked"), and the column has no values. 
> steps to reproduce using 5.1...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
> '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
> 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
> {"responseHeader":{"status":0,"QTime":730}}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
> id,stocked
> aaa,true
> bbb,false
> ccc,true
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770957#comment-16770957
 ] 

ASF subversion and git services commented on SOLR-7414:
---

Commit 6911c86c0217a50be266fad25de188d132f07127 in lucene-solr's branch 
refs/heads/SOLR-7414 from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6911c86 ]

SOLR-7414: Adding explicitly required fields support to CSV and XLSX response 
writers


> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch
>






[GitHub] tokee commented on issue #577: SOLR-13260: 128 bit integer type - longlong

2019-02-18 Thread GitBox
tokee commented on issue #577: SOLR-13260: 128 bit integer type - longlong
URL: https://github.com/apache/lucene-solr/pull/577#issuecomment-464669631
 
 
   > I believe Solr and Lucene are unlikely to support anything longer than 128 
bits - the underlying implementation types only support a maximum of 128 bits.
   
   The underlying implementation is not set in stone, so at some point there 
could be 256-bit support, or perhaps more likely, efficient fixed-width integers 
of arbitrary length. Because of that, I am partial to `int128`.
   
   My question is what we gain from a `longlong`/`int128` type. It is very 
different from the atomic numeric types, so supporting it in Solr functions 
seems likely to require a lot of implementation and maintenance effort.
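For intuition about why 128 bits is a natural boundary for the current implementation types: a signed 128-bit value can be carried as two unsigned 64-bit halves, which is roughly how a fixed-width store such as doc values could hold it. A minimal round-trip sketch (illustrative only; not the encoding used by the patch under discussion):

```python
MASK64 = (1 << 64) - 1

def int128_to_halves(v):
    """Split a signed 128-bit value into (high, low) unsigned 64-bit halves.

    Hypothetical encoding for illustration: two's-complement wrap to a
    128-bit unsigned value, then split into two machine words.
    """
    if not -(1 << 127) <= v < (1 << 127):
        raise OverflowError("value does not fit in 128 bits")
    u = v & ((1 << 128) - 1)          # two's-complement wrap to unsigned
    return (u >> 64) & MASK64, u & MASK64

def halves_to_int128(hi, lo):
    """Reassemble the signed 128-bit value from its two halves."""
    u = (hi << 64) | lo
    return u - (1 << 128) if u >= (1 << 127) else u
```

Anything wider (256-bit, arbitrary fixed-width) would need more words per value, which is part of why the representation choice shapes the type's name and scope.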


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-02-18 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7414:
---
Attachment: SOLR-7414.patch

I had a look at the use of explicitly requested fields, and it looks reasonable.

However, I saw that a lot of the CSVResponseWriter code has been reused in 
XLSXResponseWriter, and this wasn't handled there; I'm assuming the same 
problem exists there as well. I've attached an updated patch (with a 
nocommit) where I've attempted to add this support there. However, I've not 
added any tests for XLSX; [~munendrasn], can you please review it and 
possibly add a test? It would be even better if we could refactor the code 
common to both response writers into one place.

> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch
>






[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770887#comment-16770887
 ] 

ASF subversion and git services commented on SOLR-11876:


Commit a1efe979313431651e6da7a7baffb30d49f36feb in lucene-solr's branch 
refs/heads/branch_8x from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a1efe97 ]

SOLR-11876: In-place updates fail during resolution if required fields are 
present


> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.x, master (9.0), 7.7.1
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation: an in-place update of a non-indexed/non-stored numeric doc 
> values field is performed multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request, the update 
> fails, complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Set up a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.
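The failure mode in the stack trace above can be sketched in a few lines (hypothetical function names; this is not Solr's RealTimeGetComponent logic): required-field validation must run against the partial tlog document merged onto the last full document, not against the partial document alone, which carries only the updated doc-values fields.

```python
def resolve_from_tlog(partial, last_full, required):
    """Merge a partial in-place update from the tlog onto the last full
    document, then validate required fields on the merged result.

    Illustrative sketch of the bug described above: validating `partial`
    by itself (before merging) raises "missing required field" even
    though the stored document does contain the field.
    """
    merged = dict(last_full)
    merged.update(partial)          # partial carries only updated fields
    missing = [f for f in required if f not in merged]
    if missing:
        raise ValueError("missing required field: " + missing[0])
    return merged
```

Validating the merged document succeeds, while validating the bare partial update (as the pre-fix code path effectively did) fails with the same "missing required field: name" error seen in the trace.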

[jira] [Updated] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-02-18 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11876:

Attachment: SOLR-11876.patch

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.x, master (9.0), 7.7.1
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>

[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-02-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770885#comment-16770885
 ] 

ASF subversion and git services commented on SOLR-11876:


Commit 6a0f7b251de104d9ce1dfa6b18821715929fe76b in lucene-solr's branch 
refs/heads/master from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6a0f7b2 ]

SOLR-11876: In-place updates fail during resolution if required fields are 
present


> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.x, master (9.0), 7.7.1
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
